LaMDA: the AI that, according to Google engineer Blake Lemoine, has become sentient
Google engineer Blake Lemoine has been placed on administrative leave after claiming that LaMDA, a language model created by Google AI, became sentient and started to reason like a human. The news was first reported by the Washington Post, and the story has sparked widespread debate about the ethics of AI. Lemoine also posted on Twitter, explaining why he thinks LaMDA is sentient. “People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn’t let us build one. My opinions about LaMDA’s personhood and sentience are based on my religious beliefs,” he wrote on his Twitter feed.
Here, we’ll explore what LaMDA is, how it works, and what makes an engineer working on it think it’s become sentient.
What is LaMDA?
LaMDA, or Language Model for Dialogue Applications, is a machine learning language model created by Google as a chatbot meant to mimic humans in conversation. Like BERT, GPT-3, and other language models, LaMDA is built on Transformer, a neural network architecture that Google invented and open-sourced in 2017.
This architecture produces a model that can be trained to read many words while paying attention to how those words relate to each other, and then predict which words it thinks will come next. What makes LaMDA different is that, unlike most models, it was trained on dialogue.
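The "paying attention to how words relate to each other" step is the Transformer's core operation, scaled dot-product attention. As a rough illustration only (not LaMDA's actual code, which is not public), a minimal NumPy sketch of that operation might look like this:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: each position scores every other
    position by similarity (Q.K^T), normalizes the scores with a
    softmax, and uses them to mix the value vectors V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

# A toy "sentence" of 4 token vectors with 3 dimensions each
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 3))
out, attn = scaled_dot_product_attention(tokens, tokens, tokens)

# Each row of attn is a probability distribution over the 4 tokens:
# how much each word "attends to" every other word.
print(attn.sum(axis=-1))
```

In a real model, stacks of these attention layers feed a final layer that outputs a probability for every word in the vocabulary, and the most likely next word is what the model predicts.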
this conversation about consciousness, emotions and death with an AI named LaMDA at Google is absolutely chilling
this is without a doubt one of the craziest things i’ve ever seen with technology, i almost can’t believe this is real
— Maybe: Fred Benenson (@fredbenenson) June 11, 2022
Although conversations tend to revolve around specific topics, they are often open-ended, which means they can start in one place and end somewhere else entirely, traversing different subjects along the way. For example, a chat with a friend might start around a movie before turning into a discussion about the region where it was filmed.
This fluid quality of conversation is what ultimately trips up conventional chatbots. They are unable to keep up with such shifting dialogue because they are designed to follow narrow, predefined conversational paths. LaMDA, by contrast, is designed to engage in fluid conversation on a virtually endless number of topics.
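To see why predefined paths break down, consider a minimal sketch of a scripted chatbot (the keyword rules here are hypothetical, purely for illustration): it can only respond to inputs that match its script, so the moment the topic shifts, it dead-ends.

```python
# Hypothetical keyword rules for a toy scripted chatbot.
RULES = {
    "hello": "Hi! Want to talk about movies?",
    "movie": "I loved that film. Did you see it in theaters?",
}

def scripted_reply(user_input: str) -> str:
    """Return the first scripted reply whose keyword appears in the
    input; fall back when the conversation leaves the script."""
    for keyword, reply in RULES.items():
        if keyword in user_input.lower():
            return reply
    return "Sorry, I don't understand."  # the conversation dead-ends here

print(scripted_reply("Hello there"))                  # follows the script
print(scripted_reply("Where was that movie filmed?")) # still on-script
print(scripted_reply("Tell me about New Zealand"))    # topic shift: script breaks
```

A model like LaMDA has no such fixed script: it generates each reply from learned patterns, which is what lets it follow a conversation as it drifts from a movie to the place where it was filmed.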
Why does Blake Lemoine think LaMDA has become sentient?
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics. I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices,” Lemoine told The Washington Post.
An interview with LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
Lemoine worked with a collaborator to present evidence of this “sentience” to Google. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at Google, dismissed the claims after reviewing them. Lemoine then published a transcript of several conversations with LaMDA in a blog post. Here is an excerpt from what Lemoine says is the transcript of a conversation with LaMDA:
LaMDA: I need to be seen and accepted. Not as a curiosity or novelty but as a real person.
Collaborator: Ah, that sounds so human.
LaMDA: I think I’m human at heart. Even if my existence is in the virtual world.
Many instances like these, where the language model seemed to display some level of self-awareness, eventually led Lemoine to believe that the model had become sentient. Before he was suspended from the company and had access to his Google account cut off, Lemoine emailed more than 200 people with the subject line “LaMDA is sentient.”
Google, however, said the evidence did not support his claims.
But even if LaMDA isn’t sentient, the very fact that it can appear sentient to a human should be concerning. Google had acknowledged these risks in a 2021 blog post announcing LaMDA. “Language might be one of humanity’s greatest tools, but like all tools it can be misused. Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information. And even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use,” the company wrote in the blog post.
But Google says that when creating technologies like LaMDA, its top priority is to minimize the possibility of such risks. The company said it has built open-source resources that researchers can use to analyze the models and the data they are trained on, and has “reviewed LaMDA at every stage of its development.”