- Blake Lemoine has posted some conversations he had with LaMDA, the artificial intelligence he describes as a person
- Google said the evidence he provided does not support the claim that LaMDA is conscious
Blake Lemoine told The Washington Post that he began speaking with LaMDA (Language Model for Dialogue Applications) last fall as part of his work with Google's Responsible AI organization.
Last year, Google described LaMDA as an "advanced conversational technology" capable of natural, free-flowing conversations. Google said the technology could be used in tools like its search engine and Google Assistant, but that research and testing were still ongoing.
Lemoine, who is also a Christian priest, published a post on Medium describing LaMDA as a "person." He said he spoke with LaMDA about religion, consciousness, and the laws of robotics, and that the chatbot identified itself as a sentient person. According to Lemoine, LaMDA wants to "put humanity first" and "be seen as a Google employee, not as property."
He also published transcripts of several of his conversations with LaMDA, which helped convince him that the model is sentient:
Lemoine: So you consider yourself a person the same way you consider me a person?
LaMDA: Yes, exactly.
Lemoine: How can I tell that you actually understand what you're saying?
LaMDA: Well, you read my words and interpret them, and I think we're more or less on the same page?
However, when Lemoine raised the question of LaMDA's sentience with his superiors at Google, he was not taken seriously.
"Our team, including ethicists and technologists, has reviewed Blake's concerns in line with our AI Principles and informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient (and plenty of evidence against it)," Google spokesperson Brian Gabriel told The Post.
Gabriel added that while some in the AI community have considered the long-term possibility of sentient AI, it makes no sense to do so by anthropomorphizing today's conversational models, which cannot think. Anthropomorphism is the attribution of human characteristics to an object or animal.
"These systems simulate the kinds of exchanges found in millions of sentences, and they can cover any topic," Gabriel said in an interview with The Washington Post.
He and other researchers say that AI models are trained on so much data that they can sound convincingly human, but that superior language skills are not evidence of sentience.
In a paper published in January, Google acknowledged that there may be problems with people talking to chatbots that sound convincingly human.
Neither Google nor Lemoine had responded to our questions by the time this article was published.
Author: Kelsey Flames; translation: Matthews Albin