MIT group releases white papers on governance of AI

The series aims to help policymakers create better oversight of AI in society.

A committee of MIT leaders and scholars has released a set of policy briefs outlining a framework for the governance of artificial intelligence as a resource for U.S. policymakers. In pursuit of a practical way to oversee AI, the framework extends current regulatory and liability approaches.

The papers' goal is to help strengthen the United States' leadership in the field of artificial intelligence in general, while limiting potential harm from new technologies and encouraging research into how AI deployment could benefit society.

The main policy paper, “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” suggests AI tools can often be regulated by existing U.S. government entities that already oversee the relevant domains. The recommendations also underscore the importance of identifying the purpose of AI tools, which would enable regulations to fit those applications.

“As a country we’re already regulating a lot of relatively high-risk things and providing governance there,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, who helped steer the project, which stemmed from the work of an ad hoc MIT committee. “We’re not saying that’s sufficient, but let’s start with things where human activity is already being regulated, and which society, over time, has decided are high risk. Looking at AI that way is the practical approach.”

“The framework we put together gives a concrete way of thinking about these things,” says Asu Ozdaglar, the deputy dean of academics in the MIT Schwarzman College of Computing and head of MIT’s Department of Electrical Engineering and Computer Science (EECS), who also helped oversee the effort.

The project includes several additional policy papers and comes at a time when there has been increased interest in AI over the last year, as well as significant new industry investment in the field. The European Union is currently attempting to finalize AI regulations using its own approach, which assigns different levels of risk to different types of applications. General-purpose AI technologies like language models have become a new point of contention in this process. Any governance effort faces the challenges of regulating both general and specific AI tools, as well as a slew of potential issues such as misinformation, deepfakes, and surveillance.

“We felt it was important for MIT to get involved in this because we have expertise,” says David Goldston, director of the MIT Washington Office. “MIT is one of the leaders in AI research, one of the places where AI first got started. Since we are among those creating technology that is raising these important issues, we feel an obligation to help address them.”

Purpose, intent, and guardrails

The main policy brief outlines how current policy could be expanded to include AI, where possible by utilizing existing regulatory agencies and legal liability frameworks. In the field of medicine, for example, the United States has stringent licensing laws. It is already illegal to impersonate a doctor; if AI is used to prescribe medicine or make a diagnosis under the guise of being a doctor, it should be obvious that this would violate the law in the same way that strictly human malfeasance would. As the policy brief points out, this isn't just a theoretical approach; autonomous vehicles that use AI systems are subject to the same regulations as other vehicles.

The policy brief emphasizes that having AI providers define the purpose and intent of AI applications in advance is an important step in developing these regulatory and liability regimes. Examining new technologies on this basis would then reveal which existing regulatory frameworks and regulators are relevant to any given AI tool.

However, AI systems may exist at multiple levels, in what technologists refer to as a "stack" of systems that work together to provide a specific service. A general-purpose language model, for example, may underpin a specific new tool. The brief notes that, in general, the provider of a specific service may be primarily liable for problems with it. Still, "when a component system of a stack does not perform as promised, it may be reasonable for the provider of that component to share responsibility," according to the brief. Builders of general-purpose tools must therefore be held accountable if their technologies are implicated in specific problems.

“That makes governance more challenging to think about, but the foundation models should not be completely left out of consideration,” Ozdaglar says. “In a lot of cases, the models are from providers, and you develop an application on top, but they are part of the stack. What is the responsibility there? If systems are not on top of the stack, it doesn’t mean they should not be considered.”

Having AI providers clearly define the purpose and intent of AI tools, as well as requiring guardrails to prevent misuse, could also assist in determining the extent to which companies or end users are responsible for specific problems. According to the policy brief, a good regulatory regime should be able to identify "fork in the toaster" situations, in which an end user could reasonably be held responsible for knowing the problems that misuse of a tool could cause.

Responsive and flexible

While the policy framework involves existing agencies, it includes the addition of some new oversight capacity as well. For one thing, the policy brief calls for advances in auditing of new AI tools, which could move forward along a variety of paths, whether government-initiated, user-driven, or deriving from legal liability proceedings. There would need to be public standards for auditing, the paper notes, whether established by a nonprofit entity along the lines of the Public Company Accounting Oversight Board (PCAOB), or through a federal entity similar to the National Institute of Standards and Technology (NIST).

And the paper does call for the consideration of creating a new, government-approved “self-regulatory organization” (SRO) agency along the functional lines of FINRA, the government-created Financial Industry Regulatory Authority. Such an agency, focused on AI, could accumulate domain-specific knowledge that would allow it to be responsive and flexible when engaging with a rapidly changing AI industry.

“These things are very complex, the interactions of humans and machines, so you need responsiveness,” says Huttenlocher, who is also the Henry Ellis Warren Professor in Computer Science and Artificial Intelligence and Decision-Making in EECS. “We think that if government considers new agencies, it should really look at this SRO structure. They are not handing over the keys to the store, as it’s still something that’s government-chartered and overseen.”

As the policy papers make clear, there are several additional specific legal issues that will need to be addressed in the field of AI. Copyright and other intellectual property issues related to AI are already being litigated.

Then there are what Ozdaglar refers to as "human plus" legal issues, in which AI has capabilities that go beyond what humans are capable of. These include tools for mass surveillance, and the committee recognizes that they may necessitate special legal consideration.

“AI enables things humans cannot do, such as surveillance or fake news at scale, which may need special consideration beyond what is applicable for humans,” Ozdaglar says. “But our starting point still enables you to think about the risks, and then how that risk gets amplified because of the tools.”

The set of policy papers delves deeply into a number of regulatory issues. One paper, "Labeling AI-Generated Content: Promises, Perils, and Future Directions," by Chloe Wittenberg, Ziv Epstein, Adam J. Berinsky, and David G. Rand, for example, builds on prior research experiments about media and audience engagement to evaluate specific approaches for denoting AI-produced material. Yoon Kim, Jacob Andreas, and Dylan Hadfield-Menell's paper, "Large Language Models," looks at general-purpose language-based AI innovations.

“Part of doing this properly”

According to the policy briefs, another aspect of effective government engagement on the subject is encouraging more research into how to make AI beneficial to society in general.

For example, the policy paper "Can We Have a Pro-Worker AI? Choosing a Path of Machines in Service of Minds," by Daron Acemoglu, David Autor, and Simon Johnson, investigates the possibility of AI augmenting and assisting workers rather than replacing them, a scenario that would provide better long-term economic growth distributed throughout society.

This range of analyses, from a variety of disciplinary perspectives, is something the ad hoc committee wanted to bring to bear on the issue of AI regulation from the start — broadening the lens that can be brought to policymaking, rather than narrowing it to a few technical questions.

“We do think academic institutions have an important role to play both in terms of expertise about technology, and the interplay of technology and society,” says Huttenlocher. “It reflects what’s going to be important to governing this well, policymakers who think about social systems and technology together. That’s what the nation’s going to need.”

Indeed, as Goldston points out, the committee is working to bridge the gap between those who are excited about AI and those who are concerned about it by advocating for adequate regulation to accompany technological advances.

According to Goldston, the committee releasing these papers "is not an anti-technology or anti-AI group." It is, however, a group that believes AI requires governance and oversight, and that such oversight is part of doing it right. These are experts in the field, he notes, and they believe AI requires oversight.

“Working in service of the nation and the world is something MIT has taken seriously for many, many decades. This is a very important moment for that,” Huttenlocher adds.

In addition to Huttenlocher, Ozdaglar, and Goldston, the ad hoc committee members are: Daron Acemoglu, Institute Professor and the Elizabeth and James Killian Professor of Economics in the School of Humanities, Arts, and Social Sciences; Jacob Andreas, associate professor in EECS; David Autor, the Ford Professor of Economics; Adam Berinsky, the Mitsui Professor of Political Science; Cynthia Breazeal, dean for Digital Learning and professor of media arts and sciences; Dylan Hadfield-Menell, the Tennenbaum Career Development Assistant Professor of Artificial Intelligence and Decision-Making; Simon Johnson, the Kurtz Professor of Entrepreneurship in the MIT Sloan School of Management; Yoon Kim, the NBX Career Development Assistant Professor in EECS; Sendhil Mullainathan, the Roman Family University Professor of Computation and Behavioral Science at the University of Chicago Booth School of Business; Manish Raghavan, assistant professor of information technology at MIT Sloan; David Rand, the Erwin H. Schell Professor at MIT Sloan and a professor of brain and cognitive sciences; Antonio Torralba, the Delta Electronics Professor of Electrical Engineering and Computer Science; and Luis Videgaray, a senior lecturer at MIT Sloan.
