With recent studies suggesting that AI-generated faces are perceived as more trustworthy than real ones, many have worried that the technology will be used for wrongdoing. As it turns out, computer-generated faces are already being used to manipulate people — proving, once again, that humanity cannot be trusted with even a little bit of power.
AI faces trick LinkedIn users
As reported by The Register, Stanford Internet Observatory researchers Renée DiResta and Josh Goldstein have looked into the slew of AI-generated profiles on the employment-focused social network LinkedIn.
As users learn to spot the hordes of sales and recruitment bots on the platform, some companies have turned to artificial intelligence instead, creating profiles with uniquely generated AI faces to trick people into purchasing software.
The investigation started after DiResta was messaged by an AI LinkedIn profile. DiResta noticed minor discrepancies in the photo — the person wore only one earring, and tiny wisps of hair looked just a bit too blurry at the edges. This led the researchers to look deeper into the trend.
A preliminary look into the trend found 70 different companies using AI faces on LinkedIn. However, a sizable number of those companies had no idea their products were being sold with AI-generated profiles; they had contracted the work out to external marketing firms.
Why is this an issue?
Simple algorithms and bots have been used for well over a decade to trick and scam people, and AI faces are the latest evolution of that manipulation. Older bot profiles used faces lifted from stock photo libraries or other social media accounts, which could be exposed with a reverse image search. Now, never-before-seen faces can be generated in minutes.
Because these faces are created from scratch every time, there are no source images to trace, so reverse image searches can't help people identify fakes. And since most people can't tell the difference between AI faces and real faces, more people are expected to be tricked than ever before.
Artificial intelligence raises plenty of ethical issues, but most of them stem from how humans use the technology rather than from the technology itself. Generating AI faces is not a problem on its own — yet when people put those faces to nefarious use, the blame lands on the technology.