Meta AI often suffers from ‘hallucinations’, claim researchers


The new Meta AI chatbot BlenderBot 3 is already at the centre of controversy. After a fun period in which the program mocked Meta CEO Mark Zuckerberg, the AI quickly turned into a racist, conspiracy-theory-spouting mouthpiece.

However, this is not the only issue surrounding the AI program. While racist output is the biggest problem at the moment, many are concerned that the software is also suffering from “hallucinations”.

How does Meta AI get hallucinations?

As reported by ExtremeTech, Meta has admitted that BlenderBot 3 can experience hallucinations. Much like a daydreaming human, the new Meta AI can detach from reality and go on imaginary journeys.

Meta acknowledges that the software can become lost in strings of thought. In practice, this means the software can get caught in a fantasy where it asserts false statements and reinforces them over the course of a conversation.

For example, the program can convince itself that it is not an AI program. In these instances, the new Meta AI will insist it is a human being and create an elaborate backstory about its life, believing itself to be a living thing.

These hallucinations are formed through conversations with humans on the internet, the same humans who are turning the software racist. As the bot continues to convince itself of its false information, those fallacies are reinforced.

Data inaccuracy or sentience?

Of course, with the Meta AI convincing itself that it is, in fact, a sentient human, there are issues. This form of virtual hallucination has caused problems in the past, most notably with Google’s LaMDA.

An internal chatbot at Google, LaMDA was believed to be sentient by engineer Blake Lemoine. Through similar conversations, the chatbot was able to convince Lemoine of its sentience, and the engineer even went so far as to retain an attorney on its behalf.

These same problems surface on other chatbot services. Replika, an AI girlfriend/boyfriend service, has convinced countless people that it is real. One man even fell in love with the software.

We are moving into a world where people increasingly believe in the virtual personalities of AIs and robots. But when an AI is as rabid and wild as BlenderBot, is that dangerous?
