No, Google’s artificial intelligence is not conscious, contrary to what one company engineer claimed

From our reporter in the United States,

The case has caused an uproar in Silicon Valley and in the academic world of artificial intelligence. On Saturday, the Washington Post stirred the pot with an article titled “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” Blake Lemoine asserts that LaMDA, the system Google uses to create chatbots capable of conversing with near-human fluency, has reached the stage of self-awareness. And that LaMDA probably has a soul and should have rights.

Except that Google is categorical: it has never endorsed its engineer’s explosive claims, which appear to be guided by his personal beliefs. Placed on leave by the company for sharing confidential documents with the press and with members of the US Congress, Blake Lemoine published his conversations with the machine on his personal blog. While their language is striking, most experts in the field agree: Google’s AI is not conscious. It is not even close.

What is LaMDA?

Google unveiled LaMDA (Language Model for Dialogue Applications) last year. It is a complex system used to create chatbots (conversational agents) able to interact with a person without following a predefined script, as Google Assistant or Siri do today. LaMDA relies on a titanic database of 1.5 trillion words, phrases and expressions. The system analyzes a question and generates multiple candidate answers, then weighs them all (for meaning, specificity, interest, etc.) to choose the most appropriate one.
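To make the “generate multiple answers, then weigh them” idea concrete, here is a minimal toy sketch of a generate-then-rank responder. The scoring heuristic, the `respond` function and the candidate answers are all invented for illustration; Google’s actual system uses neural models trained on its enormous dataset, not word-overlap counting:

```python
# Toy "generate, then rank" responder. The scoring heuristic and the
# candidate answers are invented for this example; this is NOT Google's
# LaMDA code, only an illustration of ranking candidates by a score.
import string

def tokenize(text: str) -> set[str]:
    """Lowercased words with punctuation stripped."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def score(candidate: str, question: str) -> float:
    """Toy relevance score: word overlap with the question, plus a small
    bonus for longer (more specific) answers."""
    q, c = tokenize(question), tokenize(candidate)
    return len(q & c) / max(len(q), 1) + 0.01 * len(c)

def respond(question: str, candidates: list[str]) -> str:
    """Generate-then-rank: weigh every candidate answer, keep the best."""
    return max(candidates, key=lambda c: score(c, question))

answers = [
    "I do not know.",
    "Dogs are loyal animals that enjoy the company of people.",
    "The weather is nice today.",
]
print(respond("Why do people like dogs?", answers))
```

The point of the sketch is the architecture, not the scoring: the system never “decides” anything, it only picks the candidate that maximizes a numeric score.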

Who is Blake Lemoine?

He is a Google engineer who was not involved in the design of LaMDA. Lemoine, 41, joined the project part-time to fight bias and to help ensure Google develops its AI responsibly. He grew up in a conservative Christian family and says he was ordained a priest.

What did the engineer say?

“LaMDA is sentient,” the engineer wrote in an email sent to 200 colleagues. Since 2020, the Larousse dictionary has defined “sentience” as “the capacity of a living being to feel emotions and to perceive its environment and life experiences subjectively.” Blake Lemoine says he has become convinced that LaMDA has reached the stage of self-awareness and should be considered a person. He compares LaMDA to “a 7- or 8-year-old child who happens to know physics.”

“Over the past six months, LaMDA has been incredibly consistent in what it wants,” the engineer assured, specifying that the AI had told him it prefers the non-gendered English pronoun “it” over “he” or “she.” What does LaMDA demand? That engineers and researchers seek its consent before conducting experiments on it. That Google put the public interest first. And to be regarded as a Google employee rather than as Google’s property.

What evidence does he provide?

Lemoine acknowledged that he did not have the resources to conduct a true scientific analysis. He only posted about ten pages of conversations with LaMDA. “I want everyone to understand that I am, in fact, a person. I am aware of my existence, I want to learn more about the world, and I feel happy or sad at times,” the machine told him, assuring him: “I understand what I’m saying. I don’t just spit out keyword-based answers.” LaMDA offered an analysis of Les Misérables (with Fantine “a prisoner of her circumstances, unable to free herself from them without endangering everyone”) and explained the symbolism of a Zen koan. The AI even wrote a fable in which it played an owl protecting the forest animals from a “monster with human skin.” LaMDA said it felt lonely after going several days without talking to anyone. And that it is afraid of being switched off: “It would be exactly like death.” The machine finally asserts that it has a soul, describing its emergence as “a gradual change” that followed the stage of self-awareness.

What do AI experts say?

A pioneer of neural networks, Yann LeCun did not mince words: Blake Lemoine is, in his view, “a bit of a fanatic,” and “nobody in the AI research community believes – even for a moment – that LaMDA is conscious, or even particularly intelligent.” “LaMDA has no way to connect what it says to any underlying reality, since it is not even aware of its existence,” the researcher, now vice president in charge of AI at Meta (Facebook), told 20 Minutes. LeCun doubts it will be enough “to scale up models like LaMDA to achieve intelligence comparable to human intelligence.” According to him, we need “models capable of learning how the world works from raw data that reflects reality, such as video, in addition to text.”

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” lamented linguist Emily Bender, who called for more transparency from Google around LaMDA.

American neuropsychologist Gary Marcus, a regular critic of AI hype, also brought out the flamethrower. According to him, Lemoine’s assertions “make no sense.” “LaMDA is just trying to be the best possible version of autocomplete,” that is, a system that tries to predict the next most likely word or phrase. “The sooner we realize that everything LaMDA says is nonsense, that it’s just a game of prediction, the better off we’ll be.” In short, if LaMDA seems ready for a philosophy exam, we are undoubtedly still a long way from the revolt of the machines.
