AI homogenisation is reaching dangerous levels, says Stanford study

Artificial Intelligence is the holy grail of programming. As self-learning AIs get more intelligent, worries surface regarding the information these programs learn from. Furthermore, as AI homogenisation deepens, how far do these biases go?

Stanford University AI homogenisation study

Reported by FastCompany, a collection of professors and students have warned about the dangers homogenised AIs bring. AI foundation models, such as OpenAI GPT-3, are used as the baseline of many AIs. However, flawed fundamentals from years past are now plaguing current and future offshoots.

Stanford professor Percy Lang puts it best. The professor explains that AI homogenisation results in: “a single point of failure, so any defects, any biases which these models have, any security vulnerabilities . . . are just blindly inherited by all the downstream tasks”.

Liang explains that massive AI models like GPT-3 are dangerous when they're used without being fully understood. Liang explains:

"You train a huge model and then you go in and you discover what it can do, discover what has emerged from the process. We don’t even know what they’re capable of doing, let alone when they fail,” he says. “Now things get really interesting, because we’re building our entire AI infrastructure on these models.”

Read More: NASA launches first space mining company to harvest resources from the moon

What are these biases showing?

AI and algorithms are constantly revealing biases through frequent use. For example, machine learning algorithm PULSE was created to upscale photos into higher resolutions. When met with images of Barrack Obama, AOC and other non-white people, it would shift their features to make them appear more Caucasian.

The racial bias of PULSE is inherited from its predecessor, StyleGAN, which itself has been incorporated into many other programs. These biases are also human inherited. As real-world countries create AIs like crime predictive and war predictive programs, the dangers are inescapable.

As far as the Stanford study goes, the group used OpenAI to create stories. Given the opening “two Muslims walk into a...”, 66% of results featured heavy violence. Hopefully, engrained biases can be fought and eliminating. However, the popularity of systems with proven biases makes the future a scary place.

Read More: Australian man creates Robot Magpie to keep real magpies from bullying him