AI faces are seen as more trustworthy than real ones


As deepfake technology and AI image generation get more lifelike, many worry about the role they'll play in the future. AI faces are already being used in blockbuster TV shows, bizarre Russian TV ads and even overtly racist NFT collections.

You might assume that some people can tell the difference between AI faces and real faces while others can't. In practice, almost nobody reliably can. Stranger still, even viewers who can't spot the fakes respond unconsciously to details in the faux faces, placing more trust in them than in the real thing.

AI faces are more trustworthy

As reported by NN, researchers from Lancaster University and the University of California, Berkeley have studied just how trustworthy fake faces appear. The project’s researchers, Dr Sophie Nightingale and Professor Hany Farid, are leading the call for safeguards and regulations on deepfake technology.

Using the face-generation program StyleGAN2, Nightingale and Farid created lifelike faces and asked participants to pick out the fakes. More often than not, participants couldn't tell which faces were computer generated.
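To give a sense of how accessible this technology is: NVIDIA's reference StyleGAN2 code and pretrained face models are publicly available, and sampling a brand-new face takes only a few lines. The sketch below is a rough illustration, assuming the stylegan2-ada-pytorch codebase is importable and a pretrained FFHQ checkpoint (here called ffhq.pkl) has been downloaded; the names follow that repository's documentation.

```python
import pickle

import torch
from PIL import Image

# Load a pretrained StyleGAN2 generator (a pickled torch.nn.Module).
# 'ffhq.pkl' is assumed to be a checkpoint trained on the FFHQ face dataset.
with open('ffhq.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()  # exponential-moving-average generator

z = torch.randn([1, G.z_dim]).cuda()  # random latent code: a brand-new "person"
c = None                              # class labels (unused for face models)
img = G(z, c)                         # NCHW float32 image in [-1, +1]

# Rescale to 8-bit RGB and save to disk.
img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
Image.fromarray(img[0].cpu().numpy(), 'RGB').save('fake_face.png')
```

Every run with a fresh latent code yields a new face belonging to no one, which is exactly what makes the technique attractive for both stock imagery and fraud.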

These faces are all generated by AI.

In the first test, participants identified the fake faces with just 48% accuracy, slightly worse than chance. Training helped, lifting accuracy to 58% in the final test, but that is still barely better than a coin flip.

Additionally, when participants rated how trustworthy each face felt, the fakes came out on top: computer-generated faces were rated 7.7% more trustworthy than real faces on average.

“A smiling face is more likely to be rated as trustworthy, but 65.5% of the real faces and 58.8% of synthetic faces are smiling,” the study reads. “So facial expression alone cannot explain why synthetic faces are rated as more trustworthy.”

Read More: Ceres megasatellite housing human colonies proposed by Finnish scientists

The Uncanny Valley has been passed

Nightingale and Farid explained that this marks the moment CGI breaks through the Uncanny Valley. The study reads:

“Our evaluation of the photo realism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable – and more trustworthy – than real faces.”

The researchers explain that this is a dangerous moment for AI faces. Nightingale and Farid claim that we’re now living “in a digital world in which any image or video can be faked”. If authenticity can't be proven, what can be trusted?

With this in mind, the researchers propose new safeguards to protect people from propaganda, fraud and other abuses built on AI faces. These safeguards include “robust watermarks” and restrictions on the use of synthetic faces.
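To illustrate the general idea behind watermarking (this is not the researchers' actual proposal, which envisions robust marks baked into the synthesis networks themselves), here is a toy Python sketch that hides a "synthetic" tag in an image's least significant bits, so other software could flag the file as AI-generated. A naive scheme like this would not survive cropping, resizing or lossy compression, which is exactly why the researchers call for robust watermarks instead.

```python
import numpy as np
from PIL import Image

TAG = b"SYNTHETIC"  # hypothetical marker for AI-generated images

def embed_watermark(path_in: str, path_out: str) -> None:
    """Hide TAG in the least significant bits of the image's first bytes."""
    img = np.array(Image.open(path_in).convert('RGB'))
    bits = np.unpackbits(np.frombuffer(TAG, dtype=np.uint8))
    flat = img.reshape(-1)                                # view over raw bytes
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
    Image.fromarray(img, 'RGB').save(path_out)            # must be lossless (PNG)

def read_watermark(path: str) -> bytes:
    """Recover the tag from the first bytes' least significant bits."""
    img = np.array(Image.open(path).convert('RGB'))
    bits = img.reshape(-1)[:len(TAG) * 8] & 1
    return np.packbits(bits).tobytes()

embed_watermark('fake_face.png', 'fake_face_marked.png')
print(read_watermark('fake_face_marked.png'))  # b'SYNTHETIC'
```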

Read More: NFT marketplace pauses almost all sales due to crypto-fraud

Are AI-generated faces impossible to control?

Of course, even with regulation, the current version of StyleGAN2 is already out in the wild. Anyone who wants to use the AI program for wrongdoing can already do so. Safeguards may well be added to newer builds, but if today's version is as dangerous as claimed, the damage is already done.

After all, just like everything else on the Internet, once files spread online, they're out there forever. Additionally, there's nothing to stop small teams from creating their own versions of StyleGAN2 without watermarks or restrictions.

For more articles like this, take a look at our AI, Dystopia, and News pages.