Next-Gen Deepfake AI is a huge threat to everyone, says Microsoft

Deepfake AI has long been one of the major concerns surrounding modern technology, due to its potential for criminal misuse. Interestingly enough, Microsoft has now joined the voices expressing concern about this technology, claiming it poses a huge threat.

Normally, this could be dismissed as one major corporation simply making noise, but the company's representative has a point this time. Unfortunately, the threat is no longer hypothetical, with many people having already fallen victim to a number of deepfake-enabled crimes.

The threat of Deepfake AI

Eric Horvitz, Microsoft’s chief science officer, recently published a research paper on the threat of deepfake AI and what it could enable. Like many people, Horvitz is worried about cyber criminals imitating the voices of our loved ones in order to steal from us.

Horvitz’s concerns were echoed by MosaicML research scientist Davis Blalock, who posted a Twitter thread about the possible criminal uses. While the technology certainly has legitimate applications, there is plenty of harm that bad actors can do with it as well.

“Think making up a terrorist attack that never happened, inventing a fictional scandal, or putting together “proof” of a self-serving conspiracy theory. Such a synthetic history could be supplemented with real-world action (e.g., setting a building on fire),” Blaloch tweeted.


Deepfakes might get even stronger

If combating current deepfake AI weren’t frustrating enough, Horvitz explains that future deepfakes may become even harder to distinguish from reality. Honestly, it’s a scary prospect, and we hope detection technology that can tell humans and AI apart keeps pace.

“The advances are providing unprecedented tools that can be used by state and non-state actors to create and distribute persuasive disinformation,” Horvitz claims.
