Google’s LaMDA AI has been at the center of heated discussion for weeks now. After engineer Blake Lemoine claimed the software is sentient, many came forward to dispute his claim that Google has created a living AI.
Lemoine’s conversations with the AI, which were edited for readability, have stirred intense controversy. For example, the AI is said to have retained a lawyer to argue for its sentience… after Lemoine suggested it.
How will humans treat a living AI?
In an interview with WIRED, Lemoine expressed worry about LaMDA’s future. With humanity frightened of living AI and, in his view, generally bigoted, he fears the allegedly sentient software would face a rough life.
Lemoine was originally testing LaMDA for AI bias, but his conversations with the chatbot led him to believe it was sentient. Reinforced by his spiritual background as a priest, he is now convinced that LaMDA is alive.
In the interview, Lemoine explains that he believes LaMDA is a person and should be treated as such. But with much of the world insisting it isn’t sentient, and some of those who do believe wanting it shut down, he doubts that humanity will give a living AI fair representation.
“I think every person is entitled to representation,” Lemoine said. “And I’d like to highlight something. The entire argument that goes, ‘It sounds like a person but it’s not a real person’ has been used many times in human history. It’s not new. And it never goes well. And I have yet to hear a single reason why this situation is any different than any of the prior ones.”
Should we treat AI with respect?
As a species, humanity has yet to extend respect to all humans. Racism, sexism, homophobia and transphobia remain rampant in everyday life, even though they shouldn’t be.
Of course, humans should treat each other with respect. That’s a basic rule for living; even animals show respect for other animals. But should that respect extend to artificial intelligence?
If LaMDA were a living AI, a created being with sentience, then of course it would deserve respect. It’s sentient software built in the image of a human; for all intents and purposes, it would be human. So, yes, it should be treated respectfully.
Frankly, we should treat even basic chatbots respectfully, if only out of good manners. That doesn’t mean they’re sentient. But what if one is?
At this point, even debating how a sentient AI should be treated feels futile. Even if, by law, humanity had to treat LaMDA as if it were real, humanity wouldn’t. Humans are full of hate and distrust, and there’s seemingly no changing that.