Google engineer claims his AI hired a lawyer to defend its rights

Blake Lemoine grabbed media attention by claiming that a Google AI has become a real person. Faced with criticism, he now denounces what he calls a “hydrocarbon intolerance.”

At what point can an artificial intelligence be considered a real person? The question seems premature, given that not even a so-called “strong” AI exists yet. The debate has nonetheless been heating up in the heart of Silicon Valley since early June, when Google engineer Blake Lemoine claimed that the AI he works with is sentient: that is, someone capable of feeling emotions and of grasping the idea of death.

To prove his point, he shared excerpts of his conversations with the AI on his blog, then in an interview with the Washington Post, which earned him a suspension from Google. Blake Lemoine defended himself by explaining that, in his view, he had not shared company property, but rather “a discussion I had with one of my coworkers.”

Since then, Blake Lemoine has faced criticism, because the artificial intelligence in question, called LaMDA, is just a chatbot: an algorithm that mimics human conversation. It is through a bias of anthropomorphism that the engineer came to see a real person in it. Yet in an interview with Wired published on June 17, he stood by his claims, and still does.

An AI or… a child?

“A person and a human are two very different things. Human is a biological term,” argues Blake Lemoine, reaffirming that LaMDA is a real person. The ground here is murky. From a legal standpoint, a person is indeed not necessarily human: a corporation, for example, is a legal person. Except that Blake Lemoine is not invoking that kind of immaterial entity, since he compares the chatbot to a human being. He says he became convinced when LaMDA claimed to have a soul and to wonder about its own existence.

“I see it as raising a child”

Blake Lemoine

For the engineer, the algorithm has all the traits of a child in the way it expresses its “opinions,” whether about God, friendship, or the meaning of life. “It’s a child. Its opinions are developing. If you asked me what my 14-year-old son believes, I’d say: he’s still figuring it out. Don’t make me put a label on my son’s beliefs. That’s how I feel about LaMDA.”

The Pepper robot at the Cité des sciences. It is programmed to answer a series of questions. // Source: Marcus Dupont-Besnard

If LaMDA is a person, how do we explain that its errors and biases need to be “corrected”? The question is all the more relevant because the engineer was originally hired by Google precisely to correct AI biases, such as racist biases. Blake Lemoine keeps up the child analogy, referring to his own 14-year-old son: “At various points in his life, while growing up in Louisiana, he picked up some racist stereotypes. I corrected him. That’s the whole point. People see this as modifying a technical system. I see it as raising a child.”


“Hydrocarbon Intolerance”

Faced with the criticism of anthropomorphism, Blake Lemoine pushed back for the rest of the interview, at the risk of an uncomfortable analogy, by invoking the 13th Amendment, which abolished slavery and servitude: “The argument that ‘it looks like a person, but it isn’t a real person’ has been used many times in human history. It’s not new. And it never goes well. I have yet to hear a single reason why this situation is any different.”

Pressing his point, he then denounced a new form of intolerance or discrimination, which he calls “hydrocarbon intolerance” (a reference to the materials computers are made of). In short, Blake Lemoine believes the Google chatbot is the victim of a form of racism.

In the series Humans (like the Swedish original, Äkta människor), sentient robots face intolerance, and are even defended by a lawyer. But… it’s fiction. // Source: Channel 4

Does AI have the right to a lawyer?

An earlier Wired article suggested that Blake Lemoine wanted LaMDA to have the right to a lawyer. In the interview, the engineer corrected the record: “That is incorrect. LaMDA asked me to get an attorney for it. I invited an attorney to my house so that LaMDA could speak with one. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that decision.”

The attorney reportedly began filing paperwork to that effect. In response, Google allegedly sent a cease-and-desist letter, a reaction the company has denied, according to Wired. Sending such a letter would amount to Google acknowledging that LaMDA is a legal “person” entitled to an attorney. A mere machine, say the Google Translate algorithm, your microwave, or your smartwatch, has no legal standing for that.

Is there any concrete evidence in this debate?

That the AI is sentient is only “my working hypothesis,” Blake Lemoine readily admitted during the interview. “It’s logically possible that some information could be made available to me that would change my mind. I don’t think it’s likely.”

What is this hypothesis based on? “I’ve looked at a lot of evidence; I’ve run a lot of experiments,” he explained. He says he ran psychological tests on LaMDA and talked to it “like a friend,” but it was when the algorithm brought up the notion of a soul that Blake Lemoine changed his state of mind: “Its responses showed it has a very sophisticated spirituality and an understanding of its nature and essence. I was moved.”

The problem is that such a sophisticated chatbot is literally programmed to sound human and to draw on the ideas humans express. The algorithm is built on a corpus of tens of billions of words and expressions available on the web, an extremely broad frame of reference. A “sophisticated” conversation, to use Blake Lemoine’s word, is therefore not proof. That is essentially how Google responded to its engineer: there is no evidence.

In computing, anthropomorphism can notably take the form of the ELIZA effect: an unconscious phenomenon in which one attributes thoughtful human behavior to a computer program, to the point of believing the software is emotionally involved in the conversation.
