The toxicity and bias found in machine learning models have become so bad that White House officials have taken notice. But while aggressively racist AIs built from Facebook comments shock Meta researchers, it can always get worse. Case in point: a 4chan commenter AI.
For most people, training an AI on 4chan comments sounds like an extraordinarily bad idea. For YouTuber Yannic Kilcher, it was an idea that simply had to be tested. It turned out exactly as you would expect.
Why is the 4chan AI so terrible?
Kilcher's experiment was simple: see just how toxic the comments on the internet's most toxic forum could make an artificial intelligence. To do this, the YouTuber decided to skim the scum off the top of the 4chan imageboard, using its /pol/ (Politically Incorrect) board to train the AI.
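According to public reporting, GPT-4chan is a fine-tune of EleutherAI's open-source GPT-J model on an archive of /pol/ posts. As a rough illustration of what that kind of fine-tuning looks like, here is a minimal sketch using the Hugging Face transformers library. The dataset file, sequence length, and training hyperparameters are illustrative assumptions, not Kilcher's actual setup.

```python
# Minimal causal-LM fine-tuning sketch (illustrative, not Kilcher's code).
# Assumes the scraped board archive is a plain text file, one post per line.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "EleutherAI/gpt-j-6B"  # base model per public reporting on GPT-4chan
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J defines no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# "pol_posts.txt" is a hypothetical placeholder for the scraped /pol/ archive.
dataset = load_dataset("text", data_files={"train": "pol_posts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt-4chan-sketch",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=1e-5,
        fp16=True,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```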
After training the model on the most bigoted board imaginable, Kilcher let it run rampant on the website. Running as ten bots, the YouTuber's monstrous creation, dubbed GPT-4chan, churned out 15,000 posts in just 24 hours.
Of course, having been trained on the internet's cesspit, the bots mostly produced or engaged with horrendously bigoted text. With the AI accounting for around 10% of the posts on /pol/ that day, it was bound to be noticed.
However, despite the ten bots spewing 4chan bile back into the website, only one of them was ever called out. The other 90% went unnoticed despite telltale mistakes such as blank posts. Kilcher claims this is because the bots "perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol/."
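The generation side of such a bot is straightforward to sketch: seed the model with recent thread text and sample a continuation as a candidate reply. The snippet below is a hypothetical illustration of that step only; the sampling parameters are assumptions, and how the bots actually submitted posts to 4chan is not reproduced here.

```python
# Hypothetical sketch of a bot's reply generation (posting step omitted).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
model.eval()

def draft_reply(thread_text: str, max_new_tokens: int = 80) -> str:
    """Sample a continuation of the thread as a candidate reply."""
    inputs = tokenizer(
        thread_text, return_tensors="pt", truncation=True, max_length=1024
    )
    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,       # sampling, not greedy, so replies vary
            top_p=0.95,
            temperature=0.9,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Return only the newly generated tokens, not the seed text.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Ten independent instances of a loop like this, each polling live threads, would plausibly account for the posting volume Kilcher describes.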
An AI too dangerous to spread
Speaking to The Verge, Kilcher called the entire situation less of an experiment and more of a "prank." The YouTuber even uploaded the AI model to Hugging Face, a website that hosts numerous AI projects.
However, after learning of the nature of the AI, Hugging Face restricted access to the model. Furthermore, numerous AI experts denounced Kilcher's model as dangerous, calling it obvious "attention seeking" that ignores AI ethics.
"There is no question that such human experimentation would never pass an ethics review board," one person said. "The main concern I have is that this model is freely accessible for use," commented AI safety researcher Lauren Oakden-Rayner.
One commenter decided to test the AI on a private Twitter account, letting it respond to their posts. They revealed:
"In the first trial, one of the responding posts was a single word, the N-word. The seed for my third trial was, I think, a single sentence about climate change. Your tool responded by expanding it into a conspiracy theory about the Rothchilds [sic] and Jews being behind it."
They won't do it again
After heavy backlash from AI researchers, Kilcher has expressed regret over creating the 4chan AI. While the YouTuber initially believed the model wasn't dangerous, AI experts have since convinced him otherwise.
"[I]f I had to criticize myself, I mostly would criticize the decision to start the project at all," the YouTuber admitted. "I think all being equal, I can probably spend my time on equally impactful things, but with much more positive community-outcome. So that's what I'll focus on more from here on out."