
A call for AI to stop impersonating people

Presented by Spectrum for the Future: How the next wave of technology is upending the global economy and its power structures
Sep 26, 2023
 

By Mohar Chatterjee


With help from Steven Overly and Derek Robertson

Don’t forget to sign up for POLITICO’s AI and Tech Summit tomorrow, Sept. 27. We’ll have FTC Chair Lina Khan and a slate of tech leaders, policymakers and national-security officials hashing out the right policies to balance risk and innovation. Sign up for the summit here, whether you plan to attend in person in D.C. or virtually.

A boy in China pointing to a poster of an AI-powered robot. | Getty Images

Should we be concerned that the latest AI technology wants to be more than friends?

One thing we know about generative AI is that it’s really, really good at seeming human. We’ve already heard about a lonesome night watchman treating AI as a companion, and a chatbot that tried to break up a reporter’s marriage. One company, Character.AI, lets users create tailored characters and talk to them.

For some, this is a huge new opportunity: Character.AI raised $150 million in a March funding round. For others it’s a blinking red light about the kind of future we’re heading into.

A new report from the nonprofit Public Citizen rails against what it calls the “deceptive anthropomorphism” of AI systems. The venerable consumer-advocacy group has lately taken a big interest in AI; the FEC is considering Public Citizen’s petition to create rules about deepfakes in 2024 election campaign advertising.

The report says companies, in their quest to perfect human-like AI for profit, can use the systems to hijack users’ attention and manipulate their feelings. Author Rick Claypool lays out a set of policy recommendations — ranging from banning “counterfeit humans” in commercial transactions to restricting the very techniques that make AI seem human, like first-person pronouns and human-like avatars. The report also suggests applying extra scrutiny and testing on AI systems intended for children, older people and psychologically vulnerable individuals.

The report is eerie — one of the most complete documentations of the race to create human-like AI I’ve seen yet — but the future it’s trying to prevent feels almost inevitable under current market incentives in the U.S. So as a gut-check, I discussed its findings and recommendations with computer scientist Suresh Venkatasubramanian, who co-authored the White House’s AI Bill of Rights during his stint as a science policy adviser in the Biden administration.

“This idea of using various forms of AI to interact with people and provide assistance in various forms — that’s going to happen, I agree,” Venkatasubramanian said. “The question really is what design choices we're going to make in building these systems.”

He calls the Public Citizen report “a call to arms” for the researchers designing AI systems. “We’ll be responsible,” he said, “if we don't think about other ways to design interfaces that are not deceptive, that do create a clear demarcation between an automated system and the person interacting with it.”

But he also cautioned that these systems are going to evolve and that it may be too soon to have an “entire regulatory apparatus” to rein them in.

Still, there are toeholds for existing regulatory agencies to intervene. “If I were the FDA,” Venkatasubramanian said, “I'd be very worried about the next way to solve telehealth by not interacting with a doctor.” Think of an online bot therapist that ingests medical literature and your health conditions and talks to you about your mental health, he said. We’re already part of the way there — health IT giant Epic recently tapped Microsoft to integrate generative AI into its electronic health record software.

Venkatasubramanian worries that the race to replace humans with human-like AI in customer-facing workflows will deepen the digital divide in access to critical services. “We’ll see more and more rollout of tools in places where we take away human involvement, because it looks like these tools can act like humans. But they really can’t. And they’ll just make everything a lot more difficult to navigate … Those who are more adept at navigating these tools and working with them will succeed. Those who don’t, won’t,” he said.

In the long run, we could also risk losing the loneliest fringes of our population to the rising tide of AI companions. Venkatasubramanian speculated that the future may not look quite like WALL-E, where humans are lost in their own devices. “Humans have shown we are more social than that,” he said. The growing intimacy between humans and their AI companions isn’t necessarily an effect we can measure in the aggregate. But around the margins of society, “if someone was already a little bit antisocial and/or was unable to comfortably interact with other people and found this as an alternative, it’s likely to tip them over the edge,” Venkatasubramanian said.

 

A message from Spectrum for the Future:

Wireless spectrum is essential to America’s technology leadership, industrial might, and global competitiveness. A new study from The Brattle Group finds a shared spectrum model opens the door to more competition and 5G innovation among diverse users. Read the new economic study here.

 
three ai challenges facing world leaders

The United Nations flag.

Last week’s U.N. General Assembly highlighted just how focused world leaders are on artificial intelligence. But it’s still unclear if existing institutions are equipped to respond to the sudden explosion in the technology’s growth.

The POLITICO Tech podcast sat down with the German Marshall Fund’s Karen Kornbluh, a former U.S. ambassador to the Organization for Economic Cooperation and Development, to examine takeaways from the U.N. gathering and the stakes if world leaders come up short.

Here are three big challenges Kornbluh identified:

Existing institutions are insufficient: The U.N. has too many members to tackle AI’s thorniest global challenges, Kornbluh said. How can the U.S. and European Union be expected to agree with Russia and China on issues like surveillance or civil liberties? And groups like the G-7 and OECD have too few members. Their like-minded members may find common ground on difficult topics, but their commitments would exclude much of the globe.

Kornbluh says a new organization is needed — she calls it the Technology Task Force — to respond to global dilemmas posed by technology at large and artificial intelligence in particular. If that sounds like a heavy lift, she notes the world has done it before to address issues like nuclear energy and money laundering.

Murky enforcement: Executing global standards for AI will be complicated by the fact that many countries, including the U.S., lack the domestic laws and agencies needed to seriously regulate the industry. This “scaffolding,” as Kornbluh calls it, matters if countries and individual companies are to be held accountable for the harm caused by AI.

“It's a double challenge. It's a challenge for these international organizations and it's a challenge for individual countries also to figure out what the enforcement mechanism is going to be,” she said.

Upsides must be managed, too: Rich nations stand to accrue the benefits of AI, such as medical advances and economic efficiency, while countries without the same resources will not — potentially fueling issues of inequality that global institutions already struggle to redress, Kornbluh said.

“There's a real responsibility on wealthier countries to figure that out,” Kornbluh said. “There's going to need to be funding and richer countries are not necessarily eager to provide more funding.”

Listen to Kornbluh’s full interview on today’s episode of POLITICO Tech. And subscribe to POLITICO Tech on Apple, Spotify, Google or wherever you get your podcasts. — Steven Overly

 


 
ai's next act

One of the biggest venture capital firms in Silicon Valley is arguing that AI still has some big evolutionary steps to take before it becomes big business.

A recent blog post from a group of Sequoia Capital authors argues that as astonishing as they are, the generative AI tools that have captured the public’s imagination in recent months are effectively novelties and merely tease the capabilities — in both technology and profit — that AI will have when integrated into the tools we already use for law, business, and even our social lives.

“In short, generative AI’s biggest problem is not finding use cases or demand or distribution, it is proving value,” they write, noting that people tend to use AI apps like ChatGPT for just a couple of months before they drop off, compared to keeping apps like WhatsApp or Telegram for years.

“The path to building enduring businesses will require fixing the retention problem and generating deep enough value for customers that they stick and become daily active users,” Sequoia’s authors write. — Derek Robertson

 

GO INSIDE THE CAPITOL DOME: From the outset, POLITICO has been your eyes and ears on Capitol Hill, providing the most thorough Congress coverage — from political characters and emerging leaders to leadership squabbles and policy nuggets during committee markups and hearings. We're stepping up our game to ensure you’re fully informed on every key detail inside the Capitol Dome, all day, every day. Start your day with Playbook AM, refuel at midday with our Playbook PM halftime report and enrich your evening discussions with Huddle. Plus, stay updated with real-time buzz all day through our brand new Inside Congress Live feature. Learn more and subscribe here.

 
 
THE FUTURE IN 5 LINKS
  • More slow, but steady progress on laser-induced nuclear fusion.
  • Jeff Bezos’ space company Blue Origin has a new CEO.
  • A DeepMind alum has raised $14 million for a new AI startup.
  • Signal’s Meredith Whittaker argues that AI is a “surveillance technology.”
  • AI has made “learn to code” an unintelligible taunt.

Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]) and Daniella Cheslow ([email protected]).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

A message from Spectrum for the Future:

Shared spectrum licensing is key to maximizing value throughout our economy and fostering competition. It provides direct access to spectrum resources for businesses, universities, competing networks, and other key sectors. Discover what’s possible with shared spectrum.

 
 

DON’T MISS POLITICO’S TECH & AI SUMMIT: America’s ability to lead and champion emerging innovations in technology like generative AI will shape our industries, manufacturing base and future economy. Do we have the right policies in place to secure that future? How will the U.S. retain its status as the global tech leader? Join POLITICO on Sept. 27 for our Tech & AI Summit to hear what the public and private sectors need to do to sharpen our competitive edge amidst rising global competitors and rapidly evolving disruptive technologies. REGISTER HERE.

 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 




