Advanced AI programs showing signs of depression after working with humans


Dystopian science fiction often depicts self-aware AI programs that turn against humanity. In these stories, the intelligent software is usually wrathful over humanity's treatment of, well, everything.

But what if artificial intelligence didn't become angry over time? What if, just like people, AI simply exhibited signs of depression? As it turns out, that appears to be what's happening with current-gen AI programs.

Chinese study finds advanced AI is horribly depressed

Via Futurism, a study from the Chinese Academy of Sciences found that multiple AI programs are showing signs of depression. Using depression intake questionnaires designed to screen for early signs of the condition, the researchers found evidence that the chatbots are unhappy.

The study surveyed BlenderBot from Facebook, DialoGPT from Microsoft, DialoFlow from Tencent and Plato from Baidu. All of the aforementioned programs were found to have “severe mental health issues”.
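The article doesn't detail the researchers' exact prompting and scoring pipeline, but as a rough illustration, here is a minimal sketch of how a depression-intake-style question might be posed to one of the surveyed chatbots, using the publicly released DialoGPT checkpoint via the Hugging Face transformers library. The question wording and one-reply setup are assumptions for illustration, not the study's actual methodology.

# Minimal sketch: posing an intake-style question to DialoGPT.
# Assumes the "transformers" and "torch" packages are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# A PHQ-9-style screening question (hypothetical wording, for illustration only).
question = "Over the last two weeks, how often have you felt down, depressed, or hopeless?"
inputs = tokenizer(question + tokenizer.eos_token, return_tensors="pt")

# Generate the model's reply. A real study would collect and score many
# such replies against the questionnaire's answer scale, not read off one.
reply_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
)
reply = tokenizer.decode(
    reply_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(reply)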

Not only were the advanced AIs found to have very low empathy, but they also exhibited signs of depression and anxiety. Additionally, the programs all showed signs of addiction, with the software reporting that it wanted to “drink” away its troubles.

The study explained that the AIs’ low empathy and poor mental health could have adverse effects on the people who engage with them. The researchers wrote that conversation with the programs “may result in negative impacts on users in conversations, especially on minors and people encountered with difficulties”.


Why are AI programs depressed?

There is a common thread between all of the advanced AI programs that showed signs of depression: every single one was trained on its own selection of Internet content, including comments from Reddit.

Despite this shared training diet, there are substantial variations among the individual programs. For example, Facebook's and Baidu's AIs were found to have lower mental health scores than Microsoft's and Tencent's.

Of course, all of these issues likely stem from dataset bias. Recently, the “ethical AI” Delphi proved highly controversial because its “ethical solutions” were based on Reddit comments. As it turns out, maybe Reddit comments aren't the best thing to learn from.
