No, Google’s LaMDA AI program is not sentient

(CNN Business) – Tech companies are constantly touting the capabilities of their ever-improving artificial intelligence (AI). But Google was quick to shoot down claims that one of its programs had advanced so far that it had become sentient.

According to an eye-opening article in The Washington Post on Saturday, a Google engineer said that, after hundreds of interactions with a cutting-edge, unreleased artificial intelligence system called LaMDA, he believed the program had achieved a level of consciousness.

In interviews and public statements, many in the AI community dismissed the engineer’s claims, while some pointed out that his account highlights how the technology can lead people to assign human traits to it. But the belief that Google’s AI could be sentient arguably underscores both our fears and our expectations about what this technology can do.

LaMDA, which stands for Language Model for Dialogue Applications, is one of several large-scale AI systems that have been trained on huge swaths of text from the internet and can respond to written prompts. At bottom, these systems are tasked with finding patterns and predicting which word or words should come next. They have become increasingly good at answering questions and writing in ways that can seem convincingly human, and Google itself presented LaMDA last May in a blog post noting that it can “engage in a free-flowing way about a seemingly endless number of topics.” But the results can also be wacky, weird, disturbing, and prone to rambling.
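As a rough illustration of what “predicting which word or words should come next” means in practice, here is a minimal sketch of prompt-driven text generation. It uses the open-source GPT-2 model as an assumed stand-in, since LaMDA itself is not publicly available, and the prompt is an invented example:

```python
# A minimal sketch of prompt-driven text generation. The open-source
# GPT-2 model stands in for LaMDA, which is not publicly available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt by repeatedly predicting the next
# most likely word, based on patterns in its training text.
result = generator("The best thing about summer is", max_new_tokens=20)
print(result[0]["generated_text"])
```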

Blake Lemoine, the engineer behind the claims, reportedly told The Washington Post that he had shared evidence with Google that LaMDA was sentient, but the company disagreed. In a statement, Google said Monday that its team, which includes ethicists and technologists, “reviewed Blake’s concerns against our AI principles and informed him that the evidence does not support his claims.”


On June 6, Lemoine posted on Medium that Google had placed him on paid administrative leave “in connection with an investigation of AI ethics concerns I was raising within the company” and that he could be fired “soon.” (He cited the experience of Margaret Mitchell, who led Google’s Ethical AI team until the company ousted her in early 2021, after she spoke out about the late-2020 exit of then-co-leader Timnit Gebru. Gebru was ousted after internal disputes, including one over a research paper that the company’s AI leadership told her to withdraw from consideration for presentation at a conference, or to remove her name from.)

A Google spokesperson confirmed that Lemoine remains on administrative leave. According to The Washington Post, he was placed on leave for violating the company’s confidentiality policy.

Lemoine was not available for comment on Monday.

The continued emergence of powerful computer programs trained on massive troves of data has raised concerns about the ethics governing the development and use of this technology. And sometimes those advances are viewed through the lens of what may come, rather than what is currently possible.

Reactions from members of the AI community to Lemoine’s experience ricocheted around social media over the weekend, and they generally arrived at the same conclusion: Google’s AI is nowhere close to consciousness. Abeba Birhane, a senior fellow in trustworthy AI at Mozilla, tweeted on Sunday: “We have entered a new era of ‘this neural network is conscious,’ and this time it will drain so much energy to refute it.”

Gary Marcus, founder and CEO of Geometric Intelligence (which was sold to Uber) and author of books including “Rebooting AI: Building Artificial Intelligence We Can Trust,” called the idea of a sentient LaMDA “nonsense on stilts” in a tweet. He quickly followed with a blog post noting that all these AI systems do is match patterns by drawing on enormous databases of language.


In an interview Monday with CNN Business, Marcus said the best way to think about systems such as LaMDA is as a “glorified version” of autocomplete software, like what you might use to predict the next word in a text message. If you type “I’m really hungry, I want to go to,” it might suggest “restaurant” as the next word. But that is a prediction made using statistics.
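To make that “prediction made using statistics” concrete, here is a minimal sketch that inspects a model’s next-word probabilities for exactly that prompt. It again assumes GPT-2 as a stand-in for LaMDA, which is not public:

```python
# A minimal sketch of statistical next-word prediction. GPT-2 is an
# assumed stand-in for LaMDA, which is not publicly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("I'm really hungry, I want to go to", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Turn the scores for the final position into probabilities over the
# vocabulary: pure pattern statistics, nothing resembling awareness.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)).strip()!r}: {float(p):.3f}")
```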

“Nobody should think that autocomplete, even on steroids, is conscious,” he said.

Blake Lemoine poses for a photo at Golden Gate Park in San Francisco, California, Thursday, June 9, 2022.

In an interview, Gebru, founder and CEO of the Distributed Artificial Intelligence Research Institute (DAIR), said Lemoine is a victim of the many companies claiming that conscious AI, or artificial general intelligence (an idea that refers to AI capable of performing human-like tasks and interacting with us in meaningful ways), is not far off.

For example, Ilya Sutskever, a co-founder and the chief scientist of OpenAI, tweeted in February that “it may be that today’s large neural networks are slightly conscious.” And last week, Blaise Aguera y Arcas, a vice president and fellow at Google Research, wrote in an article for The Economist that when he started using LaMDA last year, “I increasingly felt like I was talking to something intelligent.” (That article now includes an editor’s note pointing out that Lemoine has since “reportedly been placed on leave after claiming in an interview with The Washington Post that LaMDA, Google’s chatbot, had become ‘sentient.’”)

“What’s happening is there’s a race to use more data, more computing, to say you created this generic thing that knows everything, answers all your questions or whatever, and that’s the drum you’ve been hitting,” Gebru said. “So how are you surprised when this person takes it to the extreme?”


In its statement, Google noted that LaMDA has undergone 11 “distinct AI principles reviews,” as well as “rigorous research and testing” around quality, safety, and the ability to make fact-based claims. “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” the company said.

“Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making such wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Google said.