Google AI study reveals truly dangerous, careless AI development


Of all the tech giants right now, Google is one of the most focused on artificial intelligence. Despite its prominence in AI development, Google's AI work is proving time and time again to be dangerous and careless.

From the firing of AI ethics lead Timnit Gebru and others to biased datasets, carelessness has been an uncomfortable reality at the company. However, a new study reveals just how bad it truly is.

Google AI research is dangerous

As revealed in an article by TNW, a recent study of a Google AI dataset shows just how careless the company can be. In recent years, there has been a lot of discussion around the dangers of AI, particularly dataset biases, and it would appear that Google has not listened to those warnings.

In the study, research surrounding an emotion AI program was deemed dangerous. Emotion AI has been a highly controversial topic as of late, with companies like Zoom planning to introduce emotion analysers into classrooms. Others, like Microsoft, have already started to restrict their emotion AI tools, citing the software as dangerous.

However, Google is still working on emotion AI. Recently, the company created a ready-to-train-on “fine-grained emotion dataset” called GoEmotions. Not only is the dataset built from Reddit comments, typically not a good idea, but its labelling was also outsourced to workers who failed to label the data properly.

The study reveals that the data labelling for Google's dataset was outsourced. Due to carelessness, a large amount of the dataset was mislabelled, leading to more biased and inaccurate AI models trained on it.

“A whopping 30% of the dataset is severely mislabeled!” the study reads. “We tried training a model on the dataset ourselves, but noticed deep quality issues. So we took 1000 random comments, asked Surgers whether the original emotion was reasonably accurate, and found strong errors in 308 of them.”
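The audit the study describes is, at its core, a simple random-sampling check: draw a set of labelled comments, have humans re-judge each label, and measure the disagreement rate. The sketch below illustrates that idea in Python; it is not the study's actual code, and the helper names and toy reviewer verdicts are assumptions for illustration only.

import random

# A minimal sketch of the audit described above, assuming toy data in
# place of the real GoEmotions release and a human-review step that is
# represented here as a simple True/False judgement per comment.

def sample_for_review(labelled_comments, k=1000, seed=0):
    # Draw k random (text, label) pairs for manual re-annotation.
    return random.Random(seed).sample(labelled_comments, k)

def mislabel_rate(judgements):
    # judgements: True where the reviewer found the original label
    # unreasonable. Returns the observed error rate.
    return sum(judgements) / len(judgements)

# Toy reviewer verdicts: 308 flagged out of 1,000 reproduces the
# roughly 30% figure quoted from the study.
toy_judgements = [True] * 308 + [False] * 692
print(f"{mislabel_rate(toy_judgements):.1%} judged mislabelled")  # 30.8%

With real data, the sampled comments would be handed to human reviewers rather than hard-coded, but the arithmetic is the same: 308 disagreements out of 1,000 sampled labels is how the study arrives at its roughly 30% error figure.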

Read More: Google’s LaMDA AI hired a lawyer, and immediately lost it

Carelessness equals mistakes

Google's AIs are typically seen as some of the best in the business. This is due to how much computing power Google has on hand, allowing the company to train AIs faster than almost any other company today.

However, in the company's bid to build every possible AI as fast as possible, it is increasingly failing to take proper caution. Multiple Google employees have raised this at length inside the company, often resulting in their firing.

However, as AIs get more advanced and more useful, they can become incredibly dangerous if they are not crafted with care to root out biases and mistakes. After all, they may be self-learning programs, but they are learning from human mistakes, and they're effectively being taught to double down on them.

While engineers like Blake Lemoine may claim that AIs like LaMDA have emotions, they do not. Everything is a calculation from a human source of data, and that data is cold. This is just one of the reasons why emotion AI is dangerous in the first place. But emotion AI trained on incorrectly labelled data? That's a recipe for disaster.
