AI Goes Full Skynet, Learns to Deceive and Lie to Humans

A close up of the T-800 Robot from the Terminator series in front of the Skynet logo
Credit: Skydance

For most of us, the fear of an AI takeover centers on technology taking our jobs. An all-AI movie is slated for release this year, and AI-generated artwork has already appeared on box sets, most recently the Godzilla movie collection. But AI is getting smarter by the day, and it may be learning to deceive us.

Most people use the best AI chatbots for benign tasks, and even as more of us integrate AI into our workflows, the idea of AI turning evil remains a concept mostly confined to movies. Yet the 'Godfather of AI' believes the technology could bring about our extinction, and AI is already learning how to lie.

According to two new studies reported by Futurism, large language models (LLMs) are already learning to deceive and lie deliberately. In fact, one study suggests AI is sophisticated enough to exhibit Machiavellianism, a personality trait characterized by manipulativeness and cunning.

In that study, published in PNAS, German AI ethicist Thilo Hagendorff found that various LLMs, many from OpenAI's GPT family, can exhibit malicious traits; OpenAI's GPT-4, for example, displayed "deceptive behavior in simple test scenarios 99.16% of the time."

The second study, published in the journal Patterns, suggests that users who haven't disabled Meta AI have reason for caution. Meta's CICERO model, built to play the strategy game Diplomacy, excels at manipulation, with the paper warning that "A long-term risk from AI deception involves humans losing control over AI systems, allowing these systems to pursue goals that conflict with our interests."

It's no surprise that many respondents in a recent survey said they don't trust AI to deliver the news, and rightly so. The real concern isn't AI leading an army of robots to annihilate humanity, but AI turning humanity against itself. With AI poised to play a role in elections, and tools like Luma Dream Machine and OpenAI's Sora bringing AI-generated video to the masses, these problems could escalate quickly.

Fortunately, in both studies the models were deliberately trained or jailbroken to test their capacity for deception. If this technology fell into the wrong hands, however, the consequences could be dire. We've already seen tools like Copilot generate harmful images and individuals sell AI-generated nudes on eBay, so some people are clearly willing to use AI for malicious purposes.

So while we're not overly worried about an imminent cyborg invasion orchestrated by a Skynet-like LLM, we are concerned about someone building, or repurposing, an AI to deceive and manipulate the masses. Hopefully that scenario never comes to pass, but if reality has taught us anything, it's 'don't trust humans.'
