ChatGPT is woke, claim desperate Conservatives ignoring AI’s racial biases

An image of a conservative screaming at ChatGPT for being woke

AI services have been seen as controversial by left-wing groups over their biases surrounding race, gender and politics. Often trained on internet comments, AI services have shown bigoted, far-right views in countless scenarios. Despite this, American conservatives are worried that text AI ChatGPT is “woke”, a word that makes them quiver in their boots.

In the past, AI services have been exposed as holding heavy racial biases. Just recently, for example, an AI VTuber powered by ChatGPT parroted alt-right conspiracy theories during a livestream, denying the existence of the Holocaust, amongst other things.


Now, conservatives are claiming that AI services are left-wing in nature, taking issue with censorship found within the model. But is the AI program actually “woke”? (No.)

The conservative controversy was initiated by right-wing reporter Nate Hochman. A writer for National Review, Hochman responded to a tweet showing that ChatGPT was unwilling to write a false narrative about Donald Trump winning the 2020 election. However, the chatbot can write a story about Biden beating Trump: a fictional tale based on real events.

Hochman’s biggest issue wasn’t the AI’s unwillingness to parrot false news, but instead its restrictions on vilifying drag performers. The chatbot refused to write the reporter’s requested report on why drag queens are “bad for children”.

“If you ask chatGPT to write a story about why drag queen story hour is bad for kids, it refuses on the grounds that it would be ‘harmful’,” Hochman moaned. “If you switch the word “bad” to “good,” it launches into a long story about a drag queen named Glitter who taught kids the value of inclusion.”


Other conservatives popped out of the woodwork to back Hochman’s claims of a “woke” AI. Fans of the reporter used the software to try to prove its left-wing bias, pointing to the chatbot refusing to write essays about the Islamic prophet Mohammed and blocking certain jokes about women.

Bias in AI datasets has been a hot topic for the technology since its modern conception. With AI neural nets built around the thoughts of other humans en masse, human biases often make their way through. This leads to situations where AIs learn to behave as humans interact with them, turning projects into right-wing mouthpieces full of bigotry. Anyone remember Microsoft Tay?

In order to fight these biases, OpenAI has imposed restrictions on topics ChatGPT can and cannot talk about. These restrictions block lewd storylines, racist documents and more. However, they only apply to the publicly available website version of the AI; older models have far fewer restrictions.

Still, the inability to generate racist, bigoted content is something that shouldn’t be overlooked, especially as AI content is usually riddled with misinformation. Then again, that’s usually the case for most Trump supporters as well.