
5 questions for Google's Yasmin Green

How the next wave of technology is upending the global economy and its power structures
Oct 06, 2023
 

By Mohar Chatterjee

With help from Derek Robertson and Mark Scott

PROGRAMMING NOTE: We’ll be off this Monday for Indigenous Peoples' Day but will be back in your inboxes on Tuesday, Oct. 10.

Yasmin Green. | Google

It’s Friday! Good tidings and welcome back to The Future in 5 Questions. Today we have Yasmin Green — CEO of Jigsaw, a division of Google that researches and addresses online toxicity, violent extremism and disinformation. Vogue once called her Google’s "head international troll slayer." A former Booz Allen Hamilton consultant, Green’s previous roles at Google include negotiating deals to syndicate content and heading sales strategy. In the mid-2010s, she co-chaired the European Commission’s working group on online radicalization. She chairs Aspen Digital’s U.S. Cybersecurity Group and serves on the board of the Anti-Defamation League.

Read on to hear Green’s thoughts on productive conflict, the trajectory of deepfakes and using AI to cut through the noise of democratic debate.

This interview has been edited for length and clarity.

What’s one underrated big idea?

With the backdrop that a lot of the discussion about AI and democracy has been negative, the big idea is that AI can be used to facilitate productive discourse.

At Jigsaw, we do a lot of work on how autocrats have actually developed an internet that aligns perfectly with their governing ideology. They have a much easier job because there's no dissent allowed in autocracies. In democracies, we need ideas to come into conflict with each other. And we do have an internet where that happens. But it's not done in a way that promotes understanding or compromise or empathy or any of the things that you want from productive discourse.

There’s a term coined by a Harvard fellow, Aviv Ovadya, called bridging systems — algorithms to bridge, I suppose, the rifts between communities. For the last year, we’ve been thinking about Jigsaw’s Perspective API — a suite of classifiers for moderating comment systems — through the lens of bridging systems. The New York Times, The Wall Street Journal, Wikipedia, Reddit — they all use the API. And the goal is to actually bring people into conflict with opposing viewpoints. You're not trying to resolve or eliminate the conflicts — you're trying to transform them into productive conflict.

So we have these experimental classifiers like constructiveness, personal anecdotes, understanding-seeking. With the latest, very sophisticated LLMs, we're able to identify those attributes of discourse and give moderators the opportunity to rank on the basis of these new classifiers. People in comment spaces are much happier with this new approach, which I think is an example of a bridging methodology.
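(For the technically curious: reranking with Perspective comes down to a simple REST call. Below is a minimal Python sketch, assuming an API key with the Comment Analyzer API enabled in a Google Cloud project. TOXICITY is a documented production attribute; the bridging-style attribute name used here is illustrative, since not all of Jigsaw's experimental classifiers are publicly documented.)

```python
# Minimal sketch: score comments with the Perspective API, then rerank them
# by a bridging-style attribute instead of engagement or recency.
import requests

API_KEY = "YOUR_API_KEY"  # from a Google Cloud project with the API enabled
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def score(text: str, attribute: str) -> float:
    """Return the 0-1 summary score for one attribute on one comment."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {attribute: {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"][attribute]["summaryScore"]["value"]

comments = [
    "You people never listen and never will.",
    "I used to think that too; a personal experience changed my mind.",
]
# "REASONING_EXPERIMENTAL" is a placeholder for a bridging-style classifier;
# check Jigsaw's current docs for the attributes available to your key.
ranked = sorted(comments,
                key=lambda c: score(c, "REASONING_EXPERIMENTAL"),
                reverse=True)
print(ranked)
```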

What’s a technology you think is overhyped? 

Deepfake detection as a standalone, long-term solution. That's never going to be a foolproof solution, partly because detectors are trained on datasets that contain deepfakes from a certain set of generators. And the more diverse and prevalent the generators of deepfakes become, the less likely it is that any one detector is going to generalize across all of them. Then, of course, you have motivated bad actors who expressly train their models to circumvent or bypass detection.

The expectation for deepfakes is that eventually they become indistinguishable from the real thing. If you’re creating an image that could have been taken by a camera, then no algorithm is going to be able to detect the difference. And I think that's the trajectory.

What book most shaped your conception of the future?

Ray Kurzweil’s “How to Create a Mind: The Secret of Human Thought Revealed.” It's basically him describing how the brain works and how machine learning is attempting to emulate that. At the time [I read it] I had a young toddler and she was learning things through pattern recognition. And we were developing Perspective API — the machine learning models I mentioned to you — at the same time.

I'd be like, “Wow, the models can evaluate that comment they've never seen before because they've seen so many other comments.” And then I was like, “Wow, my daughter is able to understand how to say that sentence.”

But the funny thing was that then we'd be like: “What do you mean the training data had this systemic bias?” It turns out that whenever people said the word “Muslim” in comments, it was almost invariably a negative comment. So the model intuited that “Muslim” had a negative association. And I’m hitting my head against the desk. Then I go home and my daughter uses incorrect grammar, because in her own learning process she was observing the biased training data of her life. So the whole thing was like fireworks going off for me.

What could government be doing regarding tech that it isn’t?

Within the bridging systems universe, there is this idea of using AI to help with deliberative democracy.

The problem statement now is that you cannot have a lot of people participate in a conversation and also be heard. We've just had U.N. General Assembly week in New York. The format for these things is that you get three minutes of speaking, then you sit there like a potted plant for an hour and a half while another 30 people get their three minutes. We haven't really figured out, either online or offline, a mechanism for a large number of people to participate in a discussion and also have their voices heard.

The really cool thing is that the Taiwanese government has applied Polis — open-source software designed to facilitate democratic discourse — in something called vTaiwan. They've actually used it to craft 26 different pieces of legislation. They have a really pioneering digital minister, Audrey Tang, who’s driving that. It's a much more constrained, structured conversation: you have a prompt, you're inviting everybody to speak, but they're not replying to each other. If you ask Taiwan's digital minister, she says one of the main reasons they don't have toxicity is that people are not replying to each other.

Especially with AI, you can synthesize and visualize people's positions on topics and you can have them vote and have it be iterative. I would love to see the government be a use case for AI to promote democratic outcomes.
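(To make that idea concrete: Polis-style tools cluster participants by their votes on short statements, then surface the statements that bridge the clusters. Here is a toy Python sketch of that pipeline on made-up data; the real, open-source Polis codebase is considerably more sophisticated.)

```python
# Toy sketch of the Polis-style approach: cluster participants by their
# agree/disagree votes, then find statements with broad cross-group agreement.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# votes[i, j]: participant i on statement j (1 agree, -1 disagree, 0 pass)
votes = rng.choice([-1, 0, 1], size=(200, 30))

# Project participants into 2D (this is also what powers the visualization),
# then group them into opinion clusters.
coords = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

# "Bridging" statements: those whose lowest per-group average agreement is
# highest, i.e. statements every cluster leans toward agreeing with.
per_group = np.stack([votes[groups == g].mean(axis=0) for g in range(3)])
bridging = per_group.min(axis=0).argsort()[::-1][:5]
print("Statements with the broadest cross-group agreement:", bridging)
```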

What has surprised you most this year?

The internet has been rocked by the changes to Twitter [now renamed X] — its ownership and its name and its policies and its culture. The thing that surprised me is just the sheer number of Twitter clones and spinoffs and micro-blogging websites that have been built in pursuit of the promise Twitter represented.

We really could be innovating away from the Twitter model for speech. What Musk has done is just remove a bunch of the moderation safeguards that were in place. People may be thinking of putting them back. But what about a different design for discourse? I'm surprised that so much energy and so many resources from brilliant people have gone into essentially emulating the micro-blogging website design of the mid-2000s.

 


 
 
all ai roads run through kyoto

Leaders from the world’s largest economies are heading to Japan next week to hammer out the final details for voluntary global guidelines for the development of generative artificial intelligence.

The main event of the Internet Governance Forum — at least for AI followers — will take place on Oct. 9 when G7 officials hold their one and only stakeholder event before finalizing the so-called Hiroshima Process, or voluntary commitments on generative artificial intelligence. Those plans are slated to be approved by G7 digital ministers, most likely in the second half of November.

Key participants in the discussions include European Commission Vice President Vera Jourova and the U.S. State Department’s Nathaniel Fick.

What exactly are they negotiating over? Three policymakers from G7 countries made clear the underlying principles for generative AI governance are pretty much done. That’s not a surprise, given that G7 countries keep publishing their own domestic (voluntary) policies — this one is from Canada — that are pretty much a copy/paste job of what will be announced in November.

“We recognize the need to manage new risks and challenges for individuals, society and democratic values and harness benefits and opportunities brought by advanced AI systems,” according to a recent G7 document.

Principles are one thing. Less clear is a voluntary code of conduct that G7 officials are putting together around AI transparency, accountability and safety. For the European Union, that would preferably include shoehorning the underlying concepts baked into the bloc’s legally binding Artificial Intelligence Act into whatever the G7 agrees to.

Not surprisingly, the United States, Japan and the United Kingdom aren’t the biggest fans of that approach. Those countries are more eager to focus on less structured governance compared to what the EU is calling for.

The meeting in Kyoto with industry groups, academics and civil society is to get feedback on the code of conduct before it’s finished. (Our colleague Clothilde Goujard drew the short straw and will be in Japan from Saturday, if you want to get in touch. It certainly is a hardship posting.) Expect companies to push for a focus on long-term risks and exemptions for existing generative AI systems. Expect civil society groups to call for more checks on how the technology is used now. — Mark Scott

sec vs. musk (again)

Elon Musk. | Michel Euler/AP Photo

The Securities and Exchange Commission is suing America’s premier industrial futurist.

The SEC sued Elon Musk yesterday in an attempt to force his testimony on his $44 billion purchase of Twitter last year. The filing says the commission is investigating whether Musk violated securities law. It also says Musk repeatedly skipped meetings scheduled for his testimony, arguing that “he has no justifiable excuse for his non-compliance with the SEC’s subpoena.”

Musk wrote on X (née Twitter) that “A comprehensive overhaul of these agencies is sorely needed, along with a commission to take punitive action against those individuals who have abused their regulatory power for personal and political gain.” The SEC last investigated Musk over his comments that he would take Tesla private at $420 a share, settling the case by forcing him to resign as the company’s chairman, with Musk and Tesla each paying $20 million in penalties. — Derek Robertson


THE FUTURE IN 5 LINKS
  • Elon Musk says SpaceX could land on Mars within three to four years.
  • An a16z-backed startup wants to mass produce hypersonic weapons.
  • A once-stalwart British chipmaker is struggling to keep up with the AI era.
  • A new plant in Congo is challenging China’s domination over some precious metals.
  • What if the best AI basically acts like a child?

Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]) and Daniella Cheslow ([email protected]).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


 
 
 




