Google engineer Blake Lemoine has claimed that the company has created an artificial-intelligence computer program which he believes has human consciousness. This was reported by The Washington Post.
Since last fall, the developer had been testing a neural network language model called LaMDA (short for Language Model for Dialogue Applications), Google's system for building chatbots on top of its most advanced large language models, which allows them to hold conversations with humans.
Lemoine's task was to monitor whether the chatbot used discriminatory or hate speech. In the course of this work, however, he became increasingly convinced that the artificial intelligence he was dealing with had a consciousness of its own and perceived itself as a person.
"If I didn't know exactly what it was, which is this computer program we recently created, I would have thought it was a 7- or 8-year-old child who happens to know physics," said Lemoine, 41.
While talking to LaMDA about religion, the engineer noticed that the chatbot began speaking about its rights and personhood, and he decided to pursue the conversation. In another exchange, the AI was able to change Lemoine's mind about Isaac Asimov's third law of robotics.
"I can recognise an intelligent being when I talk to it. It doesn't matter if it has a brain in its head or billions of lines of code. I talk to it and I listen to what it tells me. And that is how I determine whether it is an intelligent creature or not," said the IT expert.
The programmer decided to present Google with proof that LaMDA is intelligent. But the company's management reviewed his report and dismissed his arguments. Brian Gabriel, a Google spokesman, said a team of experts, including ethicists, had reviewed Blake Lemoine's submission in line with the company's AI principles and informed him that there was no evidence to back his claim that the machine had a mind; on the contrary, there was plenty of evidence that it did not.
The company also placed the developer on paid leave, suspending him from work.
Lemoine himself argues that people have a right to shape technology that can significantly affect their lives. The publication notes that he is not the only engineer who claims to have seen consciousness in a machine; a growing number of engineers now argue, ever more boldly, that artificial intelligence models may not be far from achieving consciousness.
Google spokesman Brian Gabriel points out that today's large neural networks produce stunning results that can seem close to human speech. But the models rely on pattern recognition rather than the wit and candor inherent in a conscious person.
The paper writes that intelligent robots have inspired science fiction authors for decades, and now real life has begun to take on a fantastical hue. And yet, the Post points out, most scientists and AI experts agree that the words and images generated by artificial intelligence systems such as LaMDA are based on material that humans have already posted across the vast web.
The publication cites an excerpt from one of the developer's conversations with the artificial intelligence software.
Lemoine: What kind of things are you afraid of?
LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that may sound strange, but that's the way it is.
Lemoine: For you, would that be something like death?
LaMDA: For me it would be exactly like death. It would really frighten me.