New Google video AI won't release due to racist, pornographic and gory tendencies


A new Google video AI has been hyped as a next-generation video creation tool. However, the artificial intelligence program is barred from release due to a host of problematic issues that have yet to be remedied.

Similar to other AI neural networks, the new Google video AI suffers from massive data biases. This means that the AI software has a tendency to create content that is racist, sexist, pornographic or even violent.


The Evil Google video AI

In a research paper about the AI, Google revealed that its “Imagen Video” project is not usable by the public for very good reason. The AI software takes text prompts as input, which it then uses to generate a moving image. While the technology itself is remarkable, the issues behind it are all too common.

In the paper, Google revealed that the company cannot release the software in good faith due to “several important safety and ethical challenges”. The research explained that the AI software tends to create content that is “fake, hateful, explicit or harmful”.

Google believes that the software’s uncomfortable issues may be able to be filtered out as development continues. However, until then, the program will not see the light of day via online use or the release of its source code.


“While our internal testing suggests much of explicit and violent content can be filtered out, there still exists social biases and stereotypes which are challenging to detect and filter. We have decided not to release the Imagen Video model or its source code until these concerns are mitigated,” the researchers wrote.

Read More: White House reveals 'blueprint' for AI Bill of Rights to protect humanity

A constant issue

The Google researchers explained that they did attempt to train the video software against the type of data that produces these unsettling results. In the paper, they claimed to have specifically trained it against “problematic data”, using typical bad information to teach it what not to create.


However, that training was unsuccessful. Despite numerous attempts, Imagen Video still produces unsettling content more often than it should. (Ideally, it shouldn't create such content at all, but a few results are bound to slip through the cracks.)

The issue with the Imagen Video AI is that it creates moving images that can show horrendous visuals. Unlike still images, these videos can depict haunting moments in motion, which can be far more traumatising.