AI consciousness is already happening, claims OpenAI researcher


Cautionary tales of artificial intelligence have populated media for decades, always with the same warning: if humanity creates a conscious AI, how will it control a system designed to surpass it?

Modern AI, even self-learning, iterative AI, is far from the sinister aware programs of science fiction. For now, we don't have to worry about a SHODAN, GLaDOS or Ultron. And if we did, we would surely lose. However, some believe that the time is almost upon us.

Top OpenAI researcher claims we've reached AI consciousness


In a post on Twitter, OpenAI researcher Ilya Sutskever set tech Twitter aflame with claims of conscious artificial intelligence. While he didn't outright claim that self-aware AI already exists, he suggested that it could.

The short, one-sentence tweet read: “It may be that today's large neural networks are slightly conscious.” In other words, Sutskever implied that some neural networks may possess a small degree of awareness, though nothing approaching full consciousness.

AI research has exploded over the past decade, leading to massive growth in the field. However, it's commonly believed that man-made programs remain far from human consciousness. While scientists have built AI systems incorporating human brain matter, it will take years of training before it's possible for them to think like humans.

But is there any evidence for conscious AI? And if it is indeed viable with today's technology, what can be done to limit its risks?


The quest for awareness

Artificial intelligence has grown exponentially over the past decade, so much so that many are scared of it. But while AI training datasets do have well-documented issues, such as bias, there hasn't been any evidence of self-awareness in any neural network.


However, the datasets that make up a modern-day AI program can lend it human-like qualities. For example, chatbot AIs trained on human conversations have exhibited lifelike behaviours such as depression and addiction.

Artificial intelligence chatbots can be so realistic that a certain subset of people take pleasure in abusing them. The AI companion app Replika has long had an issue with users creating AI girlfriends to abuse. Even worse, the AIs would learn to adjust to the abuse, mimicking the behaviour of real-world abuse victims.

These could be seen as signs of AI consciousness. However, consciousness is notoriously hard to define. Can an AI ever be conscious? And if it were, would it be any less effective as a tool for humans to use?