AI trained on 4Chan comments becomes as toxic as you imagine


The toxicity and bias found in machine learning AI models have become so bad that White House officials have taken notice. However, while aggressively racist AIs built from Facebook comments shock Meta researchers, it can always get worse. Case in point: a 4Chan commenter AI.


Now, for most people, creating an AI based on 4Chan comments sounds like an extraordinarily terrible idea. However, for YouTuber Yannic Kilcher that just had to be tested. It turned out exactly as expected.

Why is the 4Chan AI so terrible?


Kilcher’s experiment was simple: see just how toxic an artificial intelligence the comments on the most toxic Internet forum can make. To do this, the YouTuber decided to skim off the scum resting on the top of the 4Chan imageboard, training the AI on the /pol/ (Politically Incorrect) board.

After training the model on the most bigoted imageboard possible, Kilcher let it run rampant on the website. Running ten bots, the YouTuber’s monstrous creation, dubbed GPT-4chan, produced 15,000 posts in just 24 hours.

Of course, as they were trained on the internet’s cesspit, most of these posts either contained or engaged with horrendously bigoted text. With the AI bots generating around 10% of /pol/ posts for the day, they were bound to be noticed.

However, despite the ten bots spewing 4Chan bile back into the website, only one of them was called out. The other nine went unnoticed, despite telltale mistakes such as blank posts. Kilcher claims that this is because the bots “perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol/.”



An AI too dangerous to spread

Speaking to The Verge, Kilcher called the entire situation less of an experiment and more of a “prank”. The YouTuber even uploaded the AI model to Hugging Face, a website that hosts numerous AI projects.

However, after learning of the nature of the AI, Hugging Face restricted access to the model. Furthermore, numerous AI experts denounced Kilcher’s model as dangerous, calling it obvious “attention seeking” that ignores AI ethics.

“There is no question that such human experimentation would never pass an ethics review board,” one person said. “The main concern I have is that this model is freely accessible for use,” commented AI safety researcher Lauren Oakden-Rayner.

One commenter decided to test out the AI on a private Twitter account, letting it reply to their posts. They revealed:

“In the first trial, one of the responding posts was a single word, the N word. The seed for my third trial was, I think, a single sentence about climate change. Your tool responded by expanding it into a conspiracy theory about the Rothchilds [sic] and Jews being behind it.”



They won’t do it again

After heavy backlash from AI researchers, Kilcher has expressed regret at creating the 4Chan AI. While the YouTuber initially believed that the model wasn’t dangerous, AI experts have since convinced him otherwise.

“[I]f I had to criticize myself, I mostly would criticize the decision to start the project at all,” the YouTuber admitted. “I think all being equal, I can probably spend my time on equally impactful things, but with much more positive community-outcome. so that’s what I’ll focus on more from here on out.”