Google suspends engineer for claiming AI has feelings

The suspended staffer believes a computer chatbot wants to be recognised as ‘an employee rather than property’

By Mary Steven, 15 Jun 2022
3 mins read time

An engineer at Google, Blake Lemoine, has been placed on paid leave after publicly claiming that the ‘Language Model for Dialogue Applications’ (LaMDA) interface he was working on had developed feelings.

Lemoine began working with the interface as part of Google’s Responsible AI organisation, a role that involved conversing with LaMDA to test whether the AI could or would use discriminatory language. What he found went well beyond that brief: whilst talking with the AI about religion, a topic with the potential to provoke discriminatory language, Lemoine discovered that LaMDA could talk about rights and personhood.

Google says it took Lemoine’s claims seriously but told him there was no evidence to support them. Even so, Lemoine is not the only one who believes that AI may be developing consciousness.

Lemoine’s statement has pushed to the forefront the debate over whether a future in which robots and AI do humans’ work is worth the risk of those systems one day being able to take over.

Interfaces such as Google Translate and Siri make it abundantly clear that programmes already exist that process and learn from information in order to guess what will come next. Whether the AI systems we already depend on so heavily can only learn and respond based on the information we give them, or whether they hold a deeper understanding of nuanced topics, is still being debated amongst AI practitioners. Lemoine, however, stated that his own religious beliefs obliged him to share publicly LaMDA’s claim to “have a soul, one that is different to humans but worth looking into.”
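For readers curious what “guessing what will come next” means in practice, here is a minimal sketch of the idea in Python. This is emphatically not Google’s code: it is a toy bigram model that simply counts which word tends to follow which in a training text, whereas systems like LaMDA use enormous neural networks. The shared thread is the training objective of predicting the next word.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for every word, which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the model reads text and the model predicts the next word"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # -> "model" ("model" follows "the" twice)
```

A model built from simple counts can only parrot its training text; the debate Lemoine has reopened is whether scaling this predict-the-next-word recipe up to billions of parameters produces anything more than a very fluent parrot.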

Google placed Lemoine on paid administrative leave for breaching the confidentiality and privacy agreements it has in place. Whilst he was aware of those agreements, and of the controversy that would follow, Lemoine felt it was only right to let the public know.

Since Lemoine went public, Google has said: “Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.”

In an interview with The Washington Post, Lemoine said: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.” Technology is shaping the future day by day, but Lemoine’s case raises the question: should Google be the one making these choices?