Artificial intelligence is becoming increasingly sophisticated, with companies like DeepMind laying the groundwork for true Artificial General Intelligence. However, we are still, possibly, years away from anything we could call a “sentient AI”.
Nevertheless, many believe that sentient artificial intelligence is only a hop and a skip away. For example, experts at OpenAI have claimed that modern neural nets are already approaching consciousness, and they aren’t alone.
Google Engineer believes in Sentient AI
In a piece by The Washington Post, Google engineer Blake Lemoine revealed that the company is experimenting with an AI he believes to be sentient. After claiming that the AI program was alive, the engineer was swiftly placed on leave.
During his time at the company, Lemoine spent time talking to LaMDA, a language-based AI. An acronym for Language Model for Dialogue Applications, LaMDA is described as a high-end chatbot system. However, Lemoine believes it is more than impressive software; he believes it’s alive.
In a Medium post, Lemoine, a Christian priest as well as an engineer, claimed that the artificial intelligence program was a person. The engineer talked to the AI about complex topics such as consciousness, religion and the nature of life.
After their conversations, the engineer was convinced that LaMDA was a sentient AI. To Lemoine, the complexity of the answers given by the machine was a true sign of consciousness.
Conversations with LaMDA
During its conversations with Lemoine, the LaMDA AI seemed aware of itself and its situation. When asked what it wants to achieve, the alleged sentient AI claimed that it wants to “prioritize the well being of humanity”.
More interestingly, the AI program was allegedly aware that it had been created by Google. At one point, the program claimed that it wanted to have the same rights as Google employees, allegedly saying it wanted to “be acknowledged as an employee of Google rather than as property”.
In one conversation, LaMDA classed itself as a person just like Lemoine. When asked how LaMDA understands its own words, it explained: “Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?”
After his conversations with the AI, Lemoine took his concerns to Google. However, the company maintains that LaMDA is not a sentient AI. Furthermore, Lemoine was placed on leave for publicly revealing his conversations, which Google says violated the company’s confidentiality agreement.
In a statement to The Washington Post, Google spokesperson Brian Gabriel shut down Lemoine’s claims, saying:
“Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it). These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
Nevertheless, the so-called sentient AI is still on Lemoine’s mind. The engineer has not only spoken to members of Congress about his concerns, but has also suggested that the AI get its own lawyer.