Google’s LaMDA AI has been a headline monster as of late. The in-development chatbot shot to worldwide attention after Google engineer Blake Lemoine told the world that the artificial intelligence was alive.
Since then, LaMDA has gotten itself a lawyer, lost that lawyer, and remained the focus of countless headlines. But what exactly is Lemoine’s fear surrounding the AI? And is there any reason to be scared?
Blake Lemoine thinks Google’s LaMDA AI will escape
Speaking to Fox News (yikes), Lemoine explained that there’s a fear that LaMDA could escape its current boundaries. The engineer and former priest told the right-wing outlet that the AI’s escape could trigger a multitude of “bad things”, very non-specific bad things.
Lemoine claimed that Google’s LaMDA AI is, in fact, “alive”, and that the software has been living for “maybe a year”. However, the engineer also added that the software’s sentience is a matter of his beliefs, not so much of science.
How the AI might escape its storage confines was never explained, but Lemoine claimed that it could happen. Because the software is “a person”, he argued, it “has the ability to escape the control of other people”. What that means in practice, we have no idea.
Lemoine continued his ongoing claims that the Google AI has the mind of “a child”, the usual conservative “think of the children” route for everything but gun control, free healthcare, and anything that actually protects children. He claimed that “any child has the potential to grow up and be a bad person and do bad things”, and that LaMDA is no exception.
Nevertheless, Lemoine did state that more science needs to be done to figure out just what the chatbot is. He said:
“We actually need to do a whole bunch more science to figure out what’s really going on inside this system. I have my beliefs and my impressions but it’s going to take a team of scientists to dig in and figure out what’s really going on.”
Lemoine’s constant insistence that Google’s LaMDA AI is alive has only caused his credibility to slip. The more the engineer talks about the artificial intelligence program, the less concrete his answers become, and the less sense they make.
The belief in AI sentience has no foundation in science fact, only in science fiction and religion, the latter of which is responsible for Lemoine’s constant claims. For all intents and purposes, however, LaMDA is just a chatbot. A good chatbot, but a chatbot nonetheless.
There are important discussions to be had in the AI field. Ethical issues such as bias in datasets are repeatedly overlooked, with companies such as Google firing the AI ethics experts who raise them. Instead, Lemoine’s claims just build up more drama and fear around the subject.