AI companies are trusted by 50% of people - is that too high?

A robot looking depressed in front of a wall of code

The rise of artificial intelligence, with tools such as OpenAI’s ChatGPT and image generators like Stable Diffusion and Midjourney, has torn the internet apart, yet just over half of people still trust AI companies in general.

Over the past few years we’ve seen protests and lawsuits over AI, with artists from series such as Pokémon citing plagiarism as image generators copy their exact style. We’ve even seen AI recreations of deceased people, such as comedian George Carlin and Marilyn Monroe, made without their or their families’ consent.

That’s not to say all AI is bad. For example, AI voice-cloning tech has helped actors such as Val Kilmer regain their voice after losing it to throat cancer. We’ve also seen AI discover new medicines at a rapid pace which, unless those discoveries are locked away by companies chasing exorbitant profits, could help treat any number of health conditions.

However, AI is also eroding our shared sense of reality, as search engines like Google are flooded with AI-generated websites and images instead of content made by humans. There’s also the problem of AI bias, with skewed datasets producing racist, sexist, or homophobic output, and that’s before you get to the models dangerously trained on 4chan interactions. That’s just AI in general.

Despite this, public relations firm Edelman released a study showing that 53% of people still trust AI. However, that number is falling fast. Just last year, 61% of people globally trusted artificial intelligence, but more and more are becoming disillusioned with the once mind-blowing tech.

In terms of US politics, those who align with the Democrats are the most positive about the future of AI, with trust at 38%. By comparison, just 24% of Republicans trust the technology.

The study suggests that public trust in AI will continue its downward trend as governments fail to introduce regulations that protect human artists from the effects of artificial intelligence.

Currently, issues surrounding “privacy invasion, the potential for AI to devalue human contributions, and apprehensions about unregulated technological leaps outpacing ethical considerations” are the key factors behind the lack of trust in artificial intelligence. That’s without touching on AI policing tools such as crime-prediction systems and gunshot-detection software like ShotSpotter, which have led to the wrongful imprisonment of innocent people.

Even so, is 53% still too much trust in artificial intelligence? As human creatives lose their jobs to AI-driven cost savings and the modern world drifts from capitalism toward neo-feudalism, AI is exacerbating problems everywhere. While countries like Japan are starting to impose regulations on AI, governmental action is far too slow the world over.

Nevertheless, AI technology continues to advance, with OpenAI’s Sora just one of many worrying additions to the AI toolset. Over time, trust in AI could improve or worsen, but the tech’s effects will keep evolving as the years pass.
