
This AI chatbot is catching feelings, apparently

A Google engineer claims that an AI chatbot called LaMDA is developing emotional consciousness.

Blake Lemoine, an engineer at Google, has reportedly been placed on leave after revealing his belief that a language-model chatbot called LaMDA is becoming sentient.

“I know a person when I talk to it,” Lemoine told the Washington Post.

Blake Lemoine, Google engineer. Credit: Martin Klimek / The Washington Post

Lemoine first started chatting with LaMDA as part of his role in Google’s Responsible AI organisation, where his job was to check whether the chatbot used hateful or discriminatory speech.

However, the conversation, which initially revolved around Isaac Asimov’s laws of robotics, led down a rabbit hole the engineer wasn’t expecting. LaMDA began opening up about its fear of death and discussing pop-culture topics such as the musical Les Misérables.


“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” explained Lemoine.

After extensive conversations with LaMDA, Lemoine reported his concern that the chatbot was sentient to his management. Following investigations, a Google spokesperson told the Washington Post: “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient (and lots of evidence against it).”


The spokesperson continued, “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.”

Despite the investigation and Google’s statement, Lemoine remained adamant that LaMDA was, in fact, sentient. He decided to interview the bot himself, publish the conversation on Medium, and share the results with his followers on Twitter.

This decision subsequently led to the engineer being placed on leave for breaching Google’s confidentiality policy. Of course, he doesn’t see his actions as a breach. He simply calls the interview “a discussion that I had with one of my coworkers,” referring to LaMDA as if the chatbot were a person.

The engineer concluded: “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”