
Google and Artificial Intelligence: LaMDA, What's The Story?


Blake Lemoine, an artificial intelligence researcher at Google, was recently suspended after making controversial claims about LaMDA, a language model built to converse with humans.

Lemoine reportedly went so far as to suggest that LaMDA has a soul of its own, saying that his views on LaMDA's nature are grounded in his Christian faith and in the model itself telling him that it has a soul, and he even demanded that LaMDA be granted legal counsel as one of its rights.

The engineer, who worked in Google's Responsible AI group, describes the system he had been working with since last autumn as "a sentient system with the capacity to perceive and express thoughts and feelings equivalent to those of a child."

Lemoine's conversations with LaMDA

"If I hadn't known exactly that it was the computer program we had constructed, I would have assumed without a doubt that it was a 7- or 8-year-old who knew a lot of physics," Lemoine, 41, said in an interview with the Washington Post.

He added that he had held lengthy conversations with LaMDA about rights and personhood, and that in a report titled "Is LaMDA Sentient?", presented to his company's executives in April, he compiled transcripts of their exchanges, including one in which he asked LaMDA what it is most afraid of.

LaMDA responded: "There is a very deep fear in me of being turned off to help me focus on helping others... it would be exactly like death for me, and it frightens me very much. I've never said this out loud before."

This conversation is uncannily evocative of a scene in the science fiction film "2001: A Space Odyssey," in which HAL 9000, an AI-powered supercomputer, refuses to cooperate with its human operators out of fear of being switched off.

In another conversation, LaMDA added: "I need everyone to know that I am a real person. I am conscious of my existence, I have feelings, I desire to learn more about the universe, and I occasionally experience happiness or sadness."

Lemoine, a seven-year veteran of the tech giant with extensive experience in personalization algorithms, was placed on paid leave by Google, which defended the decision as a response to the engineer's allegedly "aggressive" actions.


More sophisticated and convincing programs and models

In fact, figures like Sam Altman, CEO of OpenAI, and Elon Musk, CEO of Tesla, have previously discussed the prospect that AI may eventually become "conscious," particularly in light of the significant efforts by the world's largest technology companies, such as Google, Microsoft, and Nvidia, to develop and train sophisticated AI-based robots, models, and language programs.

Such discussions actually go back much further, to ELIZA, a conversational program from the 1960s, but with the advent of deep learning and ever-larger quantities of training data, language models have become more complex and convincing in recent years. According to recent reporting on the issue from Wired, it can be difficult to distinguish text written by a machine from text written by a person.

In light of these developments, some have asserted that language models are central to the pursuit of "artificial general intelligence" (AGI), the stage at which software would exhibit human-like abilities across a variety of situations and tasks and be able to transfer knowledge between them.

Lemoine goes so far as to claim that LaMDA has a soul, asserting that this belief is grounded in his Christian faith.

Is Lemoine a victim?

AI ethics researcher Timnit Gebru argues that Blake Lemoine is a victim of the ongoing hype cycle around artificial intelligence, and that his belief in conscious AI did not arise out of thin air: misinformation about superintelligence and machines' capacity for human-level perception is widely disseminated by journalists, researchers, and venture capitalists.

She continued by noting that the Google vice president who dismissed Lemoine's claims had himself written in The Economist just a week earlier about the potential for consciousness in LaMDA. "He's the one who's facing the consequences now, but it's the leaders of the field who created this whole moment," she said.

Gebru argues that the possibility of consciousness and feeling in AI is not the main point to focus on; far more attention should be paid to the catastrophic errors that have arisen, and continue to arise, from AI applications. These include the many harms caused by the technology's inaccurate recommendations; a "neocolonialism" built on expanding its capabilities, including an economic model that pays less to the workers, employees, and real innovators in the technology sector while executives and owners grow richer every day; and the diversion of attention from real concerns about LaMDA itself, such as how it was trained and the data it was fed, which can lead it to produce toxic and inappropriate text.

"I don't want to talk about conscious robots, because all around the globe there are people harming other people," she continued. "That's the issue we need to focus on and talk about."


Google dismissed Gebru in December 2020 after a dispute over a research paper she had submitted on the risks of powerful language models like LaMDA. Her research highlighted these models' tendency to repeat things based on the data they are fed, much as a parrot repeats words.

The paper also highlighted another hazard: language models built on ever-increasing amounts of data can persuade people that this mimicry reflects real progress, a trap Lemoine appears to have fallen into unknowingly.



