
This Week in AI: Businesses voluntarily submit to AI guidelines — for now


Keeping up with an industry that moves as fast as AI is a difficult task. So until an AI can do it for you, here’s a helpful roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

This week in AI, we saw OpenAI, Anthropic, Google, Inflection, Microsoft, Meta, and Amazon voluntarily commit to pursuing shared AI safety and transparency goals ahead of a planned executive order from the Biden administration.

As my colleague Devin Coldewey writes, no rules or enforcement are proposed here: the agreed practices are purely voluntary. But the commitments indicate, in broad strokes, the AI regulatory approaches and policies that each vendor might find amenable in the US and abroad.

Among other commitments, the companies volunteered to conduct security testing of AI systems prior to launch, share information on AI risk mitigation techniques, and develop watermarking techniques that make AI-generated content easier to identify. They also said they would invest in cybersecurity to protect private AI data and make it easier to report vulnerabilities, as well as prioritize research on social risks such as systemic bias and privacy concerns.
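
The commitments don’t prescribe any particular watermarking scheme. One approach from the research literature is a statistical “green list” watermark: the generator nudges its token choices toward a pseudorandomly chosen slice of the vocabulary, and a detector checks whether a suspiciously large share of a text’s tokens fall in that slice. Here’s a minimal, purely illustrative sketch of the detector side; the hashing scheme and parameters are hypothetical, not any vendor’s actual method:

```python
import hashlib
import math

def green_stats(token_ids, vocab_size=50_000, green_ratio=0.5):
    """Toy 'green list' watermark detector.

    For each token, the previous token seeds a pseudorandom split of the
    vocabulary; a watermarked generator would have favored the 'green' slice.
    Returns the observed green fraction and a z-score against the proportion
    expected in unwatermarked text. Purely illustrative.
    """
    hits = 0
    for prev, cur in zip(token_ids, token_ids[1:]):
        # Seed the vocabulary split with a hash of the previous token.
        seed = int(hashlib.sha256(str(prev).encode()).hexdigest(), 16)
        # A token counts as "green" if its seeded index lands in the green slice.
        if (cur + seed) % vocab_size < vocab_size * green_ratio:
            hits += 1
    n = len(token_ids) - 1
    z = (hits - green_ratio * n) / math.sqrt(n * green_ratio * (1 - green_ratio))
    return hits / n, z

# Unwatermarked token IDs should score near the base rate with a low z-score.
print(green_stats(list(range(1, 2000))))
```

A statistic like this only works, of course, if the generator cooperated by biasing its sampling toward the green tokens in the first place; that coordination between model makers and detectors is what the commitment gestures at.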

The commitments are an important step, to be sure, even if they are not enforceable. But one wonders whether there are ulterior motives on the part of the signatories.

OpenAI has reportedly drafted an internal policy memo showing that the company supports the idea of requiring government licenses for anyone who wants to develop AI systems. CEO Sam Altman first floated the idea at a US Senate hearing in May, during which he backed creating an agency that could issue licenses for AI products and revoke them if someone violated set rules.

In a recent press interview, Anna Makanju, OpenAI’s vice president of global affairs, insisted that OpenAI was not “pushing” for licenses and that the company only supports licensing regimes for AI models more powerful than OpenAI’s current GPT-4. But government-issued licenses, should they be implemented in the way OpenAI proposes, would set the stage for a potential clash with open source startups and developers, who may see them as an attempt to make it harder for others to break into the space.

Devin said it best, I think, when he described it to me as “throwing nails on the road behind them in a race.” At the very least, it illustrates the two-faced nature of AI companies seeking to placate regulators while shaping policy in their favor (in this case, putting small challengers at a disadvantage) behind the scenes.

It is a worrisome state of affairs. But, if policymakers do step up, there is still hope for sufficient safeguards without undue interference from the private sector.

Here are other standout AI stories from the past few days:

  • OpenAI’s head of trust and safety steps down: Dave Willner, an industry veteran who was OpenAI’s head of trust and safety, announced in a LinkedIn post that he has left the job and moved into an advisory role. OpenAI said in a statement that it is looking for a replacement and that CTO Mira Murati will manage the team on an interim basis.
  • Custom instructions for ChatGPT: In more OpenAI news, the company has released custom instructions for ChatGPT users so they don’t have to type the same instructions into the chatbot every time they interact with it.
  • Google tests a newsroom AI: Google is testing a tool that uses AI to write news stories and has begun demonstrating it to publications, according to a new report from The New York Times. The tech giant has pitched the AI system to The New York Times, The Washington Post and The Wall Street Journal owner News Corp.
  • Apple tests a chatbot similar to ChatGPT: Apple is developing AI to challenge OpenAI, Google and others, according to a new report by Bloomberg’s Mark Gurman. Specifically, the tech giant has created a chatbot that some engineers refer to internally as “Apple GPT.”
  • Meta releases Llama 2: Meta introduced a new family of AI models, Llama 2, designed to power applications along the lines of OpenAI’s ChatGPT, Bing Chat and other modern chatbots. Trained on a mix of publicly available data, Llama 2 performs significantly better than the previous generation of Llama models, Meta claims.
  • Authors protest generative AI: Generative AI systems like ChatGPT are trained on publicly available data, including books, and not all content creators are happy with the arrangement. In an open letter signed by more than 8,500 authors of fiction, nonfiction, and poetry, the tech companies behind large language models like ChatGPT, Bard, LLaMA and others come under fire for using their writing without permission or compensation.
  • Microsoft brings Bing Chat to the enterprise: At its annual Inspire conference, Microsoft announced Bing Chat Enterprise, a version of its AI-powered Bing Chat chatbot with business-focused data privacy and governance controls. With Bing Chat Enterprise, chat data is not saved, Microsoft cannot see a customer’s business or employee data, and customer data is not used to train the underlying AI models.

More machine learning

Technically, this was news as well, but it’s worth mentioning here in the research section. Fable Studios, which previously made CG and 3D short films for virtual reality and other media, showed off an AI model it calls Showrunner that (it claims) can write, direct, act in, and edit an entire television show; in its demo, the show was South Park.

I’m of two minds on this. For one thing, I think pursuing this at all, let alone during a major Hollywood walkout involving compensation and AI issues, is in very bad taste. Although CEO Edward Saatchi said he believes the tool puts the power in the hands of creators, it’s arguable the opposite is true as well. In any case, it was not very well received by people in the industry.

On the other hand, if someone on the creative side (which Saatchi is) doesn’t explore and demonstrate these capabilities, then they will be explored and demonstrated by others with fewer qualms about putting them to use. Even if the claims Fable makes are a bit broad for what they actually showed (which has serious limitations), it’s like the original DALL-E in that it generated debate and, indeed, concern even though it wasn’t a replacement for an actual artist. AI will have a place in media production one way or another, but for a whole bunch of reasons it needs to be approached with caution.

On the policy side, some time ago we had the National Defense Authorization Act passing (as usual) with some really ridiculous politicized amendments that have nothing to do with defense. But among them was one saying that the government should host an event where researchers and companies can do their best to detect AI-generated content. This sort of thing is definitely approaching “national crisis” levels, so it’s probably a good thing it slipped in there.

Over at Disney Research, they are always trying to find a way to bridge the digital and the real, presumably for park purposes. In this case, they’ve developed a way to map a character’s virtual movements or motion capture (for example, for a CG dog in a movie) onto a real robot, even if that robot is a different shape or size. It relies on two optimization systems, each of which informs the other about what is ideal and what is possible, something like a little ego and superego. This should make it much easier to make robot dogs act like normal dogs, but of course it can be generalized to other things as well.
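
As a rough mental model of that back-and-forth (not Disney’s actual algorithm — the objectives, smoothing step, and joint limits below are all made up for illustration), you can think of it as alternating between pulling a trajectory toward the reference motion and projecting it back into what the hardware can actually do:

```python
import numpy as np

def retarget(reference, joint_min, joint_max, iters=200, lr=0.1):
    """Toy alternating optimization for motion retargeting.

    Step 1 pulls the trajectory toward the reference motion (the 'ideal');
    step 2 smooths it and clips it to the joint limits (the 'possible').
    Hypothetical sketch only, not Disney Research's method.
    """
    traj = np.clip(reference.copy(), joint_min, joint_max)
    for _ in range(iters):
        traj -= lr * (traj - reference)                                 # fidelity to the source motion
        traj[1:-1] = 0.5 * traj[1:-1] + 0.25 * (traj[:-2] + traj[2:])  # smooth so the robot can track it
        traj = np.clip(traj, joint_min, joint_max)                      # project into reachable joint angles
    return traj

# 100 timesteps x 12 joints of reference motion, squeezed into tighter joint limits.
reference = np.sin(np.linspace(0, 6, 100))[:, None] * np.ones((1, 12))
feasible = retarget(reference, joint_min=-0.7, joint_max=0.7)
```

The interesting part of the real system is presumably that the two sides negotiate rather than one simply overriding the other, which is what keeps the result looking like the original character instead of a clipped, robotic imitation.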

And here’s hoping that AI can help us steer the world away from mining for minerals at the bottom of the sea, because that is definitely a bad idea. A multi-institutional study put AI’s ability to separate signal from noise to work predicting the locations of valuable minerals around the world. As the researchers write in the abstract:

In this paper, we address the complexity and inherent “disorder” of our planet’s intertwined geological, chemical, and biological systems by employing machine learning to characterize embedded patterns in the multidimensionality of mineral occurrence and associations.

The study actually predicted and verified locations of uranium, lithium, and other valuable minerals. And how about this for a closing line: the system “will improve our understanding of mineralization and mineralization environments on Earth, throughout our solar system, and over time.” Awesome.
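
To make the “patterns in mineral associations” idea a little more concrete, here’s a toy sketch of the general shape of such an approach: learn from which minerals co-occur at known sites, then score new sites for a target mineral. Everything below — the synthetic data, the labels, the model choice — is made up for illustration and is not the study’s actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_sites, n_minerals = 500, 40

# Synthetic stand-in data: which of 40 minerals were observed at each known site.
co_occurrence = rng.integers(0, 2, size=(n_sites, n_minerals))
# Fake label: "target mineral present" when enough of its usual companions appear.
has_target = (co_occurrence[:, :5].sum(axis=1) >= 3).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(co_occurrence, has_target)

# Score unexplored sites by the mineral associations observed there.
unexplored = rng.integers(0, 2, size=(10, n_minerals))
scores = model.predict_proba(unexplored)[:, 1]
print(scores.round(2))
```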


