Science without conscience is but the ruin of AI

Google has suspended one of its engineers, Blake Lemoine, who claims that the artificial intelligence he works on can feel “human emotions”. The question of machine consciousness is not new, but advances in the field of AI have brought it back into the news. According to most experts, the truth is that this eventuality remains a distant one.

He described it as “a nice little kid who just wants to help the world” and asked his colleagues to “take good care of it” in his absence. Blake Lemoine has been placed on “administrative leave” by Google, The Washington Post reported on Saturday, June 11. At issue: the “little kid” so dear to this engineer is an artificial intelligence (AI) named LaMDA.

Blake Lemoine maintained to his superiors that this program had developed a form of consciousness and was capable of feeling “human emotions”. And he did not stop there: he advocated for LaMDA’s “rights” and contacted members of the US Congress to discuss “Google’s unethical practices [toward this AI, Ed.]”, The Washington Post summarizes.

Introduced to transcendental meditation

Officially, Google suspended its engineer, who had worked for the Internet giant for seven years, for violating the confidentiality rules of its research. More generally, though, “large corporations try to keep as much distance as possible from anything controversial, and the question of machine sentience clearly falls into this category,” notes Reza Vaezi, an expert in cognitive science and artificial intelligence at Kansas State University.

But Blake Lemoine has no intention of stepping aside quietly. On the day the Washington Post article appeared, he published on the Medium platform a first long post transcribing excerpts from his discussions with LaMDA. Then the engineer took up the pen again, still on Medium, to drive the point home: he explained that this algorithm had “begun to learn transcendental meditation”. And in his telling, the fact that LaMDA could not pursue this initiation after learning of Blake Lemoine’s suspension expressed a very human frustration. “I do not understand why Google refuses to grant it something so simple and inexpensive: the right to be consulted, so as to obtain its consent, before each experiment,” the engineer concludes.

This public airing of the disagreement between Google and its employee over AI consciousness did not fail to provoke widespread reactions in the scientific community. Most artificial intelligence experts maintain that Blake Lemoine “made the mistake of attributing to the machine properties it does not have,” as Claude Touzet, an expert in neuroscience and artificial neural networks at Aix-Marseille University, puts it.

“He goes very far in his claims, without providing conclusive evidence to back them up,” says Jean-Gabriel Ganascia, a computer scientist, philosopher and chairman of the CNRS ethics committee.

In fact, Blake Lemoine says he was struck by the relevance and coherence of LaMDA’s remarks. During an exchange about the difference between a slave and a servant, this AI asserted that it did not understand the nuance tied to the salary paid to the one but not the other: as a machine, it pointed out, it has no need for money. “It was this level of self-awareness that prompted me to dig deeper,” says Blake Lemoine.

LaMDA, a sophisticated “chatbot”

It is true that “the ability to reflect on one’s own condition is one of the ways of defining consciousness,” Jean-Gabriel Ganascia acknowledges. But LaMDA’s answer does not prove that the machine knows what it is and what it feels. “You have to be very careful: the algorithm is designed to generate answers, and given the current performance of language models, it is not surprising that they appear coherent,” stresses Nicolas Sabouret, a professor of computer science and artificial intelligence expert at Université Paris-Saclay.

This is all the less surprising with LaMDA. This conversational agent, also known as a “chatbot”, uses the latest language-modeling techniques. “There was a revolution in language models in 2018,” sums up Sophie Rosset, research director at the Interdisciplinary Laboratory of Digital Sciences and an expert in human-machine dialogue systems.
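
What this looks like in practice: below is a minimal sketch of such answer generation, assuming the Hugging Face transformers library and the small open model distilgpt2 as a stand-in, since LaMDA itself is not publicly available.

```python
# A minimal sketch of how a modern chatbot produces a reply: the model
# extends the conversation by predicting plausible next tokens.
# Assumptions: the Hugging Face "transformers" library is installed, and
# "distilgpt2" stands in for a proprietary system such as LaMDA.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

prompt = "User: Do you ever feel lonely?\nBot:"
inputs = tokenizer(prompt, return_tensors="pt")

# No understanding is involved: generation samples likely continuations
# given the statistics of the model's training text.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```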

Since then, chatbots have become increasingly adept at conversing with humans, giving an impression of emotion, and deceiving people. LaMDA benefits from a further advantage: “it was able to learn from hundreds of millions of conversations between Internet users that Google could collect on the Internet,” notes Laurence Devillers, professor of artificial intelligence at CNRS and author of the book “Emotional Robots”. In other words, this AI has, statistically, one of the richest libraries of semantic contexts from which to determine the best answer.
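
That statistical idea can be illustrated, under the same assumptions as above (transformers, PyTorch, distilgpt2), by ranking candidate replies by how probable the model finds them in context; this is only a sketch of the principle, not Google’s actual selection mechanism.

```python
# Sketch: rank candidate replies by the average log-probability the
# language model assigns to their tokens in context. Illustrative only;
# this is not LaMDA's real answer-selection method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

context = "User: What do machines feel?\nBot:"
candidates = [" Nothing, I only process text.", " I feel joy and sadness."]

def avg_logprob(context: str, reply: str) -> float:
    """Average log-probability of the reply's tokens given the context."""
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    ids = tokenizer(context + reply, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # logits[0, i] predicts token i+1, so reply tokens are scored from
    # position ctx_len - 1 onward.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    scores = log_probs[torch.arange(targets.shape[0]), targets]
    return scores[ctx_len - 1 :].mean().item()

best = max(candidates, key=lambda r: avg_logprob(context, r))
print("Statistically preferred reply:", best)
```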

The dialogue transcribed by Blake Lemoine on Medium shows “the fluidity of the exchanges and LaMDA’s handling of semantic shifts, that is, changes of subject,” Sophie Rosset acknowledges.

But to conclude scientifically that this AI has consciousness, much more is needed. There are, moreover, tests that, even if imperfect, provide more convincing results than a conversation with an engineer. Alan Turing, one of the pioneers of artificial intelligence, established a protocol in the 1950s to determine whether a machine could repeatedly deceive a human into believing he was talking to one of his fellow humans.
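
In outline, one trial of such a protocol might look like the toy sketch below, where the interrogator, the human and the machine are all placeholder functions standing in for real participants.

```python
# Toy sketch of a Turing-test-style trial: an interrogator questions a
# hidden partner, randomly a human or a machine, then guesses which it
# was. All three roles are illustrative placeholders.
import random

def machine_reply(question: str) -> str:
    return "That's a good question. What do you think yourself?"

def human_reply(question: str) -> str:
    return "Honestly, it depends on the day."

def judge_guess(transcript: list) -> str:
    # A real interrogator would weigh the transcript; this one guesses.
    return random.choice(["human", "machine"])

def run_trial(questions: list) -> bool:
    """Return True when the machine took part and was judged human."""
    partner_is_machine = random.random() < 0.5
    respond = machine_reply if partner_is_machine else human_reply
    transcript = [(q, respond(q)) for q in questions]
    return partner_is_machine and judge_guess(transcript) == "human"

questions = ["Do you dream?", "What did you eat this morning?"]
fooled = sum(run_trial(questions) for _ in range(100))
print(f"Machine passed as human in {fooled} of 100 trials")
```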

The myth of Frankenstein

Advances in natural-language models have shown the limits of the Turing test. Other, more recent tests ask, for example, two dialogue agents to develop a new language that has nothing to do with what they have learned, explains Reza Vaezi, who has created such a test. According to him, this exercise makes it possible to evaluate a “creativity that suggests a form of consciousness” in the machine.

There is no indication that LaMDA would be able to clear this hurdle, and “we are most likely in the presence of a classic case of anthropomorphic projection [attributing human traits to animals or objects, editor’s note],” Claude Touzet maintains.

At bottom, this case illustrates the desire, even among AI experts at Google, to see a conscious artificial intelligence brought into the world. “It is the myth of Frankenstein: the wish to be the first to create an individual endowed with consciousness outside natural reproduction,” observes Nicolas Sabouret.

But in the case of AI, “a sometimes unfortunate choice of words may have given the impression that we are trying to fashion a human being,” the expert adds. The very expression “artificial intelligence”, when what is really at stake is programming, gives the impression that the algorithm is endowed with intelligence, says Nicolas Sabouret. The same goes for expressions such as “neural networks” or “machine learning”, which refer to human characteristics.

He believes this whole affair could be detrimental to artificial intelligence research. It can give the impression that the field is nearing a breakthrough that is not on the horizon at all, which risks “creating false expectations that end in disappointment”.

Still, if this Google engineer could be fooled by his AI, it is because “we are at a turning point in language simulation,” Laurence Devillers maintains. Algorithms like LaMDA have become so powerful and complex that “we are playing sorcerer’s apprentice with systems whose capabilities, in the end, we do not know,” she adds.

Could an AI as skilled in the art of dialogue as LaMDA be used, for example, to persuade someone to commit a crime, wonders Jean-Gabriel Ganascia.

According to Laurence Devillers, AI research has reached a point where it is urgent to put ethics back at the center of the debate. “We submitted an opinion of the National Pilot Committee for Digital Ethics on precisely this subject, the ethics of conversational agents, in November 2021,” she notes.

“On the one hand, the engineers working in these large corporations need to have an ethical framework and be responsible for their work and words,” the expert argues. On the other hand, she believes this affair demonstrates the urgency of setting up “independent expert groups” capable of establishing ethical standards for the entire sector.
