Get Even More Visitors To Your Blog, Upgrade To A Business Listing >>

Accountants to the AI rescue

Presented by CTIA – The Wireless Association: How the next wave of technology is upending the global economy and its power structures
Oct 23, 2023 View in browser
 

By Ben Schreckinger

Presented by CTIA – The Wireless Association

With help from Derek Robertson

The OpenAI logo on a phone and the ChatGPT interface. | AP

In search of ideas for ensuring a safe future with increasingly powerful AI, people have looked to lawmakers, coders, scientists, philosophers and activists.

They may be overlooking the most important inspiration of all: Accountants.

New polling shared first with DFD finds that a wonky policy idea enjoys surprising popularity among American adults: requiring mandatory safety Audits of AI models before they can be released.

Audits as a way to control AI don't literally involve accountants; they're an evolving idea for how to independently assess the risks of a new system. Like financial audits, they aren’t exactly sexy, especially when more dramatic responses like bans, nationalization, and new Manhattan Projects are on the table. That may explain why they have not played an especially prominent role in policy discourse.

“It's under-represented, under-understood,” said Ryan Carrier, a chartered financial analyst who advocates for AI audits.

But the Artificial Intelligence Policy Institute — a new think tank focused on existential AI risk — found that when it asked about 11 potential AI policy responses in head-to-head preference questions, respondents chose the AI safety audit idea over others two-thirds of the time (making it second only to the vaguer response of “Preventing dangerous and catastrophic outcomes”).

In fact, the idea of government-mandated audits of digital technology is already starting to gain traction. The EU’s year-old Digital Services Act mandates that the largest online platforms — like Amazon, YouTube, and Wikipedia — submit to annual independent audits of their compliance with its provisions.

And an AI policy framework unveiled last month by senators Josh Hawley of Missouri, a Republican, and Richard Blumenthal of Connecticut, a Democrat, calls for an independent oversight body to license and audit risky models.

AI Policy Institute’s founder, Daniel Colson, said he decided to include audits among the policy responses after finding it was popular in surveys of experts, including one published in May by the Center for Governance of AI, a nonprofit that was spun off of Oxford’s Future of Humanity Institute.

“It’s in the sweet spot of something that’s maybe feasible but also a major priority of the safety community,” Colson said.

How would this actually work? It turns out that AI safety proponents have been fleshing the idea out for years.

Pre-release audits break down into two main types: Pre-deployment audits, which examine the plans for the AI model, and post-deployment audits, which examine the functioning of a model after it’s been built but before it’s been put into use in the real world.

As for the legal and procedural framework, one version of the idea calls for replicating the system that already exists in the financial world, where public companies must submit to audits by independently certified accountants who are liable for their conclusions.

“If you adapt it correctly from financial audits, it’s got a 50-year track record,” said Carrier, who founded a charity, ForHumanity, in 2016 to develop an infrastructure, like standards and auditor exams, for AI audits.

A former hedge fund operator, Carrier said he got an up-close look at the potential for unregulated AI to run amok when his fund began building it into its automated trading tools. In addition to a pre-release safety audit, Carrier said that high-risk AI systems — which can evolve over time — should be subjected to annual independent audits, just as publicly traded companies are.

Of course, auditing only goes so far: As fiascos like the Enron implosion have made clear in the world of finance, even strict auditing requirements aren't foolproof. They can be thwarted by the old-school pitfalls of corporate fraud or greed, with auditing firms going easy on important clients.

And AI "audits" have a unique layer of complexity. Because even the designers of large language models don't fully understand their inner workings, the models themselves cannot be audited directly, said Ben Schneiderman, a professor emeritus of computer science at the University of Maryland and the author of “Human-Centered AI.”

Schneiderman — who along with Carrier and 18 other researchers authored a 2021 paper calling for AI audits — said that instead, auditors would need access to a model’s training data, and would then need to rely on observations of the model’s inputs and outputs.

At a time when much U.S. public polling shows low trust in government bodies and high levels of anxiety about AI, the idea of farming out supervision of an opaque technology to a standardized process, rather than a powerful agency, could have legs.

“I like the phrase ‘independent oversight,’” Schneiderman said.

 

A message from CTIA – The Wireless Association:

China is pushing countries to adopt their 5G Spectrum vision and build a global market that favors their tech companies. To counter China’s ambitions, we need our own compelling vision for U.S. spectrum leadership over the next decade, and a clear commitment to make more 5G spectrum available. For our economic competitiveness, our national security, and our 5G leadership, America needs a bold new National Spectrum Strategy. Learn more.

 
ai eo skepticism

We don’t know exactly what’s in the Biden administration’s upcoming executive order on AI, although there have been a few hints.

But Matthew Mittelsteadt, a research fellow and technologist at the free-market-oriented Mercatus Institute, has some recommendations. In a new Substack post, Mittelsteadt wrote that some of the breadcrumbs that have been dropped so far — the ones aimed at shaping America’s societal relationship with AI — are troubling, echoing some industry fears about AI regulation in general.

He argues that taking the focus off the basic, yet tricky, task of federal procurement will create “competing, inconsistent and sometimes contradictory goals which will water down the overall impact while hampering procurements with competing incentives,” and advises that government should “just [focus] on responsibly diffusing AI in the federal government and improving how the government does business.”

Furthermore, Mittelsteadt says that a more focused order can accomplish the rest of the administration’s goals: “If executed well, an executive order that focuses on and succeeds at creating a government that adopts and diffuses AI quickly and in the ‘right’ way can still shape markets,” he writes. “Given its size and scope, the government has extensive capacity to play with this tech, iterate, experiment, and discover what rules, designs and norms work best. If the resulting AI truly is a joy to use… people will take notice, expectations will be set, and perhaps this performance standard will be adopted widely.” — Derek Robertson

 

A message from CTIA – The Wireless Association:

 
eeoc flexing

The EEOC seal. | David Zalubowski/AP Photo

AI in the workplace could be a “new civil rights frontier,” according to Charlotte Burrows, chair of the Equal Employment Opportunity Commission.

POLITICO’s Olivia Olander writes in today’s Weekly Shift newsletter that the Biden administration’s EEOC is staking its claim to AI governance under its newly appointed Democratic majority, giving it the opportunity to make policy at the tech’s frontier.

Victoria Lipnic, former acting GOP EEOC chair and a partner at Resolution Economics, told Olivia that the EEOC could update the hiring guidelines established by the federal government that haven’t been updated since the 1970s. The EEOC has already established guidelines to prevent AI from violating the Americans with Disabilities Act, or the discrimination provisions of the Civil Rights Act.

“People’s attention tends to come about as a result of some kind of cataclysmic event,” Michael Lotito, co-chair of the Workplace Policy Institute at the firm Littler Mendelson, which represents employers, told Olivia. “AI is the new toy, and I’m sure that the employment lawyers and the regulators will have a great time playing with it.” — Derek Robertson

 

JOIN 10/25 FOR A TALK ON THE FUTURE OF GRID RELIABILITY: The EPA’s proposed standards for coal and new natural gas fired power plants have implications for the future of the electric grid. These rules may lead to changes in the power generation mix—shifting to more renewable sources in favor of fossil-fuel plants. Join POLITICO on Oct. 25 for a deep-dive conversation on what it will take to ensure a reliable electric grid for the future. REGISTER NOW.

 
 
Tweet of the Day

THE FUTURE IN 5 LINKS
  • Some college students are dropping out to chase AI money.
  • …While that same boom is giving new life to San Francisco.
  • Amazon’s new bipedal robots could be a pivot point for the industry.
  • Companies are using AI to give food more accurate expiration dates.
  • Read a capsule history of quantum cryptography.

Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]) and Daniella Cheslow ([email protected]).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

A message from CTIA – The Wireless Association:

America’s spectrum policy is stuck in neutral. The FCC’s spectrum auction authority has not been renewed, there is no pipeline of new spectrum for 5G, and China is poised to dominate global spectrum discussions, pushing for 15X more 5G spectrum than the U.S. America cannot afford to fall behind and become a spectrum island. The Biden Administration’s forthcoming National Spectrum Strategy is a unique and important opportunity to recommit ourselves to a bold vision for global spectrum leadership, secure our 5G leadership today and long-term leadership of the industries and innovations of the future. For our economic competitiveness and our national security, we need a National Spectrum Strategy that is committed to allocating 1500 MHz of new mid-band spectrum for 5G, and that reaffirms the critical role that NTIA and the FCC play in leading the nation’s spectrum policy. Learn more.

 
 

WSF 2023 will discuss ‘Mastering the New Economy’, examining the ways in which business and society can thrive despite current economic and environmental challenges. The conference will gather 100+ speakers from companies including Volkswagen, Siemens and C3.ai, as well as U.S. Senator for Tennessee Bill Hagerty; Florida’s Chief Financial Officer Jimmy Patronis; Former President of Colombia Iván Duque Márquez and Former President of Ecuador Jamil Mahuad. Learn more and register now at www.worldstrategicforum.com.

 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 

Follow us

 

To change your alert settings, please log in at https://www.politico.com/_login?base=https%3A%2F%2Fwww.politico.com/settings

This email was sent to [email protected] by: POLITICO, LLC 1000 Wilson Blvd. Arlington, VA, 22209, USA

Please click here and follow the steps to unsubscribe.



This post first appeared on Test Sandbox Updates, please read the originial post: here

Share the post

Accountants to the AI rescue

×

Subscribe to Test Sandbox Updates

Get updates delivered right to your inbox!

Thank you for your subscription

×