Google was begged to test if LaMDA was sentient, it refused


Artificial Intelligence reaching sentience is a theory often touched on in science fiction, but what happens when fiction turns real? That was on the minds of everyone earlier this year when a Google engineer claimed the company’s AI chatbot LaMDA was alive. But what did the company think about this claim?

Google refused sentience test

Ex-Google AI engineer Blake Lemoine was fired for leaking transcripts of his conversations with the company’s LaMDA AI. Despite being let go for breaking his NDA by discussing the AI, the ex-engineer is still talking about the software.


In an interview with The Guardian, Lemoine explains that he urged Google to run sentience tests on the AI. According to the engineer, the company refused to run any such experiments. Google also continues to reject Lemoine’s claims of sentience.

Lemoine also explained that he believes the public should have been made aware of the technology. Furthermore, the engineer is of the opinion that the public should be allowed to judge whether or not the AI is sentient. He said:

"I raised this as a concern about the degree to which power is being centralized in the hands of a few, and powerful AI technology which will influence people's lives is being held behind closed doors. There is this major technology that has the chance of influencing human history for the next century, and the public is being cut out of the conversation about how it should be developed."

Read More: Police won’t abuse crime prediction AI, says crime prediction AI creator

Is there any possibility that LaMDA is sentient?

Lemoine was not alone in fearing that Google had created a sentient artificial intelligence program. After releasing an edited transcript of his conversations, he was able to rally parts of the public to his side. However, there’s still no proof that the AI is sentient.

Furthermore, the engineer’s proposed sentience test for LaMDA would require such a test to exist in the first place. Alongside that, whoever created the test would need a concrete definition of what is and isn’t sentient, a question that is already hotly debated.

Creating a sentient computer may one day be a real issue, but it is most certainly an issue of the future. LaMDA is not sentient, neither is Meta’s BlenderBot, nor was Microsoft’s Tay.