
Meta Needs You in Its Generative AI Gambit

Meta wants your help with its generative AI initiatives and is going all out with community programs to get it. In November last year, Meta announced that it would run Community Forums as a way to help the company make decisions about its technologies. Allowing a diverse group of people to discuss issues and offer perspectives and recommendations would, Meta believes, ‘improve the quality of governance’. Meta’s focus at the time was the metaverse.

The results of the first global deliberative poll, conducted in collaboration with Stanford University, were released last month. The poll involved 6,300 people from 32 countries and nine regions around the world. Participants spent hours in conversation in online group sessions and interacted with non-Meta experts about the issues under discussion. The topic: moderation and monitoring systems for bullying and harassment in the metaverse. After months of experimentation, ironically, the metaverse is no longer relevant. However, 82% of the participants recommended that Meta follow the same deliberative democracy format for future decisions, and Meta has decided to adopt a similar process for its generative AI tech.

Humans in the Loop 

Quite literally, keeping people in the loop for decision-making is Meta’s new model. Last month, the company launched a Community Forum on Generative AI with the goal of gathering feedback on what people would ‘want to see reflected in new AI technologies’. Meta believes in giving the public and experts a say in product and policy decisions around generative AI, and it claims to be actively working with academics, researchers and community leaders. But why the push?

Having faced enough flak in the past for capturing user information and breaching data privacy on its social media platforms (Facebook and Instagram), Mark Zuckerberg is probably pulling a reverse move by appearing to hand people control over formulating the next step.

Meta has also been a founding member of Partnership on AI, a non-profit community, since 2016, where it works with industry experts, organisations, media and others to address concerns about the future of AI and to formulate the ‘right ethical boundaries’. Ironically, Meta’s recently launched microblogging platform Threads coerces users into granting access to personal information on their phones in order to use the app.

Not The Best Approach 

The human-feedback system that Meta is experimenting with does come with limitations. How much of people’s feedback is flawless, and how much of it can actually be implemented, is questionable. In the pilot community program for mitigating bullying in the metaverse, participants were not aligned on punishing users involved in repeated bullying and harassment. For instance, removing members-only spaces that saw repeated bullying had only 43% support.

Furthermore, the participants had no interaction with the decision makers, i.e., Meta employees, which made the process seem like a simple survey or a data-gathering experiment rather than a democratic exercise.

In Others I Trust

With the countless conversations surfacing around AI safety guidelines and the need for universal regulatory policies, every major tech company claims to be working towards them. Meta is no exception in following another tech company’s lead, in this case OpenAI’s. Meta is trying hard to catch up with OpenAI while speeding ahead in the open-source LLM race. Tracing the reigning chatbot maker’s path, Meta appears to be adopting the democratic decision-making approach that OpenAI is pursuing.

OpenAI announced $1 million in grants to fund experiments in democratising AI rules and tackling AI safety mishaps. The company also announced another million for its cybersecurity grant program, for the creation and advancement of AI-powered cybersecurity tools and technologies. In other words, a program where people can help create and fix the company’s security framework.

While the move can be viewed critically as a tactic to keep the government from interfering with the company’s plans for AI regulation, or even as a way to appear to be a responsible company working ‘for the people’, big tech is slowly adopting the democratic route.

Recently, Anthropic spoke about how it would improve its work on constitutional AI by talking to ordinary people and not just experts. DeepMind also released a paper that investigates how international institutions can help manage and mitigate AI risks. In one of the complementary models the company proposed, an AI safety project would bring together researchers and engineers and give them access to advanced AI models for safety research.
