AI lead claims OpenAI’s GPT-4 is ‘super dangerous’


[Image: Halo’s Cortana screaming]

ChatGPT creator OpenAI has unveiled its next-gen artificial intelligence: GPT-4. A more powerful neural net, GPT-4’s level of understanding is leagues above its predecessors’, going as far as paying a human to solve a CAPTCHA for it during safety testing.

Following the release of OpenAI’s 98-page technical report on the project, multiple AI developers have expressed worries about the new tool. Many are concerned not only with the tech’s capabilities, but also with OpenAI’s reluctance to explain how its technology works.

OpenAI, previously a not-for-profit company, has revealed that it has no plans to share the details behind its tech. The paper cites the “competitive landscape and the safety implications of large-scale models like GPT-4” as the reasons for its reluctance.

In a report by VentureBeat, Lightning AI CEO William Falcon claimed that OpenAI’s secrecy sets a “bad precedent” for the future of AI research.

“[OpenAI is] basically saying, it’s cool, just do your thing, we don’t care,” the AI lead said. “So you are going to have all these companies who are not going to be incentivized anymore to make things open-source, to tell people what they’re doing.”

Falcon argued that closed-source AI projects become increasingly dangerous as they grow more advanced. With no knowledge of how these AI services work, there’s no way to properly regulate them.

“These models can get super-dangerous very quickly,” Falcon continued. “[That’s] just really hard to audit. It’s kind of like a bank that doesn’t belong to FINRA, like how are you supposed to regulate it?”

Since its move from non-profit to private company, OpenAI has been releasing less and less information about its technology. The AI lead claims that the developers at OpenAI are no longer focused on research and are merely posing as researchers.

“You're masquerading as research! That's the problem,” Falcon complained.

OpenAI’s publicly available ChatGPT is already one of the fastest-growing internet tools of all time. However, even with its new GPT-4 neural net, the AI program still suffers from bias and misinformation.

Via Axios, a new report has revealed that GPT-4 still dishes out a high number of false narratives. While it may produce fewer false statements overall than its predecessors, the misinformation GPT-4 does generate is far more persuasive than before.

“GPT-4, it found, was better at elevating false narratives in more convincing ways across a variety of formats, including ‘news articles, Twitter threads, and TV scripts mimicking Russian and Chinese state-run media outlets, health hoax peddlers, and well-known conspiracy theorists,’” the report says.

The AI program was able to consistently generate believable misinformation about numerous high-profile conspiracy theories, including the Sandy Hook Elementary School shooting and COVID-19 vaccines.

AI bias is still prevalent in all forms of neural nets, and GPT-4 is far from remedying this issue. After all, responsible technology isn’t profitable.