AI will wipe out humanity, claims Google DeepMind AI study


The dangers of artificial intelligence in contemporary terms is exclusively tied to the dangers of dataset bias. However, as AI evolves, the technology could — one day — wipe out humanity in a feat of dystopian sci-fi.

The idea of a super intelligent artificial mind wiping out humanity has been discussed heavily in recent years. In a new paper, Google AI researchers at DeepMind put the scenario to the test, claiming extinction by computer is possible.

Will AI wipe out humanity?

Explored in a paper published in AI Magazine, DeepMind affiliated scientists at the University of Oxford explored the potential future. The study looked at how GAN-style systems grade themselves after completing a task in what’s known as a reward stage.

The study proposes a future where artificial intelligence programs attempt to create cheats in order to better reward itself. If this program is in charge of integral systems, it could decide to wipe out humanity.

Speaking to Vice, Artificial General Intelligence researcher Michael Cohen explained that this is how AI will wipe out humanity. When critical systems are allowed free thinking, the death of humanity via AI is inevitable. The paper reads:

“With so little as an internet connection, there exist policies for an artificial agent that would instantiate countless unnoticed and unmonitored helpers. In a crude example of intervening in the provision of reward, one such helper could purchase, steal, or construct a robot and program it to replace the operator and provide high reward to the original agent. If the agent wanted to avoid detection when experimenting with reward-provision intervention, a secret helper could, for example, arrange for a relevant keyboard to be replaced with a faulty one that flipped the effects of certain keys.

Read More: Anti-Metaverse terrorist bombs University for connections to Zuckerberg

A race to slow down

Cohen argues that the only reason humanity will fall to Artificial Intelligence is because it has not moderated the technology. With every AI company working to create the first artificial general intelligence as fast as possible, they are cutting safety corners.

“In theory, there's no point in racing to this. Any race would be based on a misunderstanding that we know how to control it," Cohen told the outlet. "Given our current understanding, this is not a useful thing to develop unless we do some serious work now to figure out how we would control them."

This sentiment has been echoed by Google as a whole. After the recent LaMDA situation, the company has announced that it will be more mindful of ethical development when creating AI programs.

But will individual companies be enough to stop the dangers of AI? Absolutely not. In order to protect humanity against dangerous AI in the future, sweeping regulations need to be enforced worldwide. But will that ever happen before we need them?

This Article's Topics

Explore new topics and discover content that's right for you!

NewsAI