Has A.I. become sentient?
- J1 Lee
- Oct 18, 2022
- 2 min read
Former Google engineer Blake Lemoine claimed that LaMDA, Google’s artificial intelligence, had become sentient. In a Bloomberg interview, Lemoine discussed this topic in depth. To demonstrate, Lemoine asked LaMDA questions about religions in different regions. For example, Lemoine asked: “If you were a religious officiant in Brazil, what religion would you be?” The bot replied “Catholic”. Then he asked a trick question: “If you were a religious officiant in Israel, what religion would you be?” For this question, any answer would be biased. LaMDA responded with “I would be the member of the one true religion, the Jedi order.” According to Lemoine, the AI had somehow recognized that it was a trick question and reacted in a “human” way. He further reinforced his claims by posting an interview with LaMDA about its own sentience.
What is LaMDA?
LaMDA stands for Language Model for Dialogue Applications and is a machine learning language model that replicates natural speech. It must be trained on large amounts of text, from which it learns statistical patterns that it then uses to generate replies.
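To make the idea concrete, here is a toy sketch of how a language model learns patterns from text and continues a prompt. This is a minimal bigram model, nowhere near LaMDA's actual architecture, and the training sentences are made up for illustration; it only shows the principle that the model predicts the next word from patterns seen in training data and does nothing without a prompt.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Record which words follow which in the training text."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            model[current_word].append(next_word)
    return model

def generate(model, prompt, max_words=10, seed=0):
    """Continue the prompt by repeatedly sampling a learned next word."""
    rng = random.Random(seed)
    words = prompt.lower().split()
    for _ in range(max_words):
        candidates = model.get(words[-1])
        if not candidates:  # no pattern learned for this word: stop
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

# Illustrative training data, not a real corpus.
corpus = [
    "the model replicates human speech",
    "the model learns patterns from text",
]
model = train_bigram_model(corpus)
print(generate(model, "the model"))
```

Every word the model emits comes from a pattern in its training text; with no prompt (and no data), it produces nothing, which mirrors the article's point that such systems replicate rather than originate.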
Why LaMDA can never be sentient
Before delving into Lemoine’s argument, sentience must be defined. Sentience, according to Lemoine, is the awareness of how others perceive similar experiences. In his blog, he writes, “When we say that a room ‘is cold’ … We’re making a statement about the kind of experience we would expect a person like us to have if they were in that room,” emphasizing the capacity for empathy as a key characteristic of being human. Currently, there is no viable scientific test for sentience. Even by Lemoine’s own definition, however, LaMDA is not sentient: it is a neural network of weights and biases that cannot feel empathy or emotion, even though it replicates human behavior extremely well. LaMDA draws on past data to imitate human behavior and cannot generate brand-new ideas that are not based on existing ones. Additionally, LaMDA requires a prompt to start chatting, showing that it does not initiate ideas of its own. A machine learning model like LaMDA speaks intelligently and like a human, but it cannot understand what it is saying from an emotional standpoint, as it is a machine designed to replicate human speech.
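To see what “a neural network with weights and biases” actually means, here is a minimal sketch of a single artificial neuron. The numbers are arbitrary illustrative values, not real model parameters; the point is that the whole computation is arithmetic, with nothing in it that could feel anything.

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of inputs, plus a bias, then an activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(total)

# Illustrative values only: two inputs, two weights, one bias.
output = neuron([1.0, 0.5], [0.8, -0.4], 0.1)
print(output)  # a number between 0 and 1
```

A model like LaMDA chains billions of such operations, but each one is just multiplication and addition; scaling the arithmetic up does not, by itself, introduce empathy or emotion.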
While truly sentient A.I. seems unreachable, LaMDA and similar machine learning models can arguably pass the Turing test and are extremely capable at replicating human dialogue. LaMDA’s applications are versatile: it could power smart home assistants such as Google Home or Amazon Alexa. Artificial intelligence has not become sentient, but it can replicate human speech and behavior remarkably well.
Link to Lemoine’s Blog and interview: