Artificial intelligence experts express concern over Elon Musk-backed letter citing their research

Four artificial intelligence experts have expressed concern after their work was cited in an open letter – co-signed by Elon Musk – demanding an urgent pause in research.

The letter, dated March 22 and with more than 1,800 signatures as of Friday, called for a six-month circuit breaker in developing systems “more powerful” than Microsoft-backed OpenAI’s new GPT-4, which can hold human-like conversations, compose songs and summarize long documents.

Since the release of GPT-4’s predecessor, ChatGPT, last year, rival companies have been rushing to launch similar products.

The open letter says AI systems with “human-competitive intelligence” pose serious risks to humanity, citing 12 pieces of research from experts including academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind.

Civil society groups in the US and EU have since pressed lawmakers to curb OpenAI research. OpenAI did not immediately respond to requests for comment.

Critics have accused the Future of Life Institute (FLI), the organization behind the letter, which is primarily funded by the Musk Foundation, of prioritizing imagined doomsday scenarios over more immediate concerns about AI, such as racist or sexist biases programmed into the machines.

Among the research cited was “On the Dangers of Stochastic Parrots,” a well-known article co-authored by Margaret Mitchell, who previously oversaw AI research ethics at Google.

Mitchell, now chief ethics scientist at artificial intelligence firm Hugging Face, criticized the letter, telling Reuters it was unclear what counted as “more powerful than GPT-4”.

“By treating many questionable ideas as data, the letter affirms a set of priorities and a narrative about AI that benefits FLI supporters,” she said. “Ignoring active damage right now is a privilege that some of us don’t have.”

Her co-authors Timnit Gebru and Emily M. Bender criticized the letter on Twitter, with the latter calling some of its claims “unbalanced”.

FLI Chairman Max Tegmark told Reuters the campaign was not an attempt to hinder OpenAI’s commercial advantage.

“It’s pretty hilarious. I’ve seen people say, ‘Elon Musk is trying to slow down the competition,’” he said, adding that Musk had no role in drafting the letter. “It’s not just one company.”

Risks now

Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with her work being mentioned in the letter. Last year, she co-authored a research paper arguing that the widespread use of AI already poses serious risks.

Her research argued that the current use of AI systems could influence decision-making on climate change, nuclear war, and other existential threats.

She told Reuters: “AI does not need to reach human-level intelligence to exacerbate these risks.”

“There are non-existential risks that are really, really important, but don’t get the same kind of attention on a Hollywood level.”

Asked to comment on the criticism, FLI’s Tegmark said the short- and long-term risks of AI should both be taken seriously.

“If we quote someone, it just means we claim they endorse that sentence. It doesn’t mean they endorse the letter, or that we endorse everything they think,” he told Reuters.

Dan Hendrycks, director of the California-based Center for AI Safety, whose work was also cited in the letter, stood by its contents, telling Reuters it was sensible to consider black swan events — those that seem unlikely but would have devastating consequences.

The open letter also warned that generative AI tools could be used to flood the internet with “propaganda and untruth”.

Dori-Hacohen said it was “pretty rich” for Musk to have signed it, citing a reported rise in misinformation on Twitter following his acquisition of the platform, documented by civil society group Common Cause and others.

Twitter will soon launch a new fee structure for accessing its search data, which could hamper research on the topic.

“That has directly impacted the work in my lab, and that done by others studying mis- and disinformation,” Dori-Hacohen said. “We’re operating with one hand tied behind our backs.”

Musk and Twitter did not immediately respond to requests for comment.

© Thomson Reuters 2023

