
When It Comes To AI In Elections, We're Unprepared For What's Coming


Rep. Yvette Clarke (D-NY) is one of a handful of Democrats who have been trying to get ahead of the possible threats — some of which could seriously disrupt the country's elections and threaten democracy — posed by ever-more-rapidly evolving AI technology.

Earlier this month, the New York Democrat introduced the REAL Political Ads Act, legislation that would expand the current disclosure requirements, mandating that AI-generated content be identified in political ads.

The bill is one of a few efforts to regulate AI that lawmakers have introduced in recent months, but Clarke's bill and its companion in the Senate have not yet attracted the Republican support they'd need to pass — or even substantial support from the sponsors' fellow Democrats. 

AI, meanwhile, is advancing at a ferocious speed, and experts warn that lawmakers are not treating this issue with the seriousness they should given the role the unprecedented technology could play as soon as the 2024 election. 

As with all aspects of society that may be impacted by AI, the precise role it may play in elections is hard to game out. Clarke's legislation focuses in particular on her concerns about AI-generated content supercharging the spread of misinformation around the upcoming elections. The need to create transparency for the American people about what is real and what is not is more urgent than ever, Clarke told TPM, in part because the technology is so cheap and easy for anyone to use. 

Experts TPM spoke with echoed that fear. 

"[AI] puts very powerful creation and dissemination tools in the hands of ordinary people," Darrell M. West, senior fellow at the Center for Technology Innovation at Brookings Institution, told TPM. "And in a high stakes and highly polarized election, people are going to have incentives to do whatever it takes to win — including lying about the opposition, suppressing minority voter turnout, and using very extreme rhetoric in order to sway the electorate."

"This is not really a partisan issue," West added. "People on every side of the political spectrum should worry that this stuff might be used against them."

Some Republicans have expressed concern about the technology, but have not yet signed on to legislation.

Clarke said she is happy to see that the interest to implement guardrails is there, but she is worried that it might be too little too late.

"Experts have been warning members of Congress about this and we've seen the rapid adoption of the use of the technology," Clarke said. But still the congresswoman told TPM, "I don't think we've acted quick enough."

"We want to get stakeholders on board. We want to make sure that the industry is to a certain extent cooperative, if not, neutral, in all of this so we're not fighting an uphill battle with respect to erecting these guardrails and protective measures. But when you keep seeing signs of the usage of deceptive video and how rapidly it can be circulated online that should make everyone uneasy and willing to do the work to erect guardrails," she added.

Congress has historically been slow — sometimes comically so — to conduct effective oversight of technology-related issues, often reacting to problems rather than proactively addressing them through legislation. That has been especially true when it comes to the role of technology in elections — including, recently, social media. 

"They're like the drunk looking for the keys," Oren Etzioni, the founding CEO of the Allen Institute for AI told TPM. "They are ignoring the clear and present danger."

"Nothing matters until there is passed legislation," Imran Ahmed, CEO of the Center for Countering Digital Hate, said.

"We've had unbelievable amounts of talk on social media — some incredibly insightful, some incredibly dumb — and yet nothing has happened. We do not need endless discourse on the potential harms of AI, especially given that the people who are producing it are themselves saying we want regulation to avoid a race to the bottom, which is exactly what happened with social media," he added. "Congress's failure to deal with social media should not be an excuse for why they failed to do so on AI. It should be a warning against what happens if they fail to do so on AI."

There are some other bills besides Clarke's that have been introduced on Capitol Hill but experts like Ahmed say, "there are too many piecemeal hypothecated solutions."

"What we need is a comprehensive framework," Ahmed told TPM. "It is just the U.S. That remains a laggard in protecting their public safety and protecting the long term sustainability of the industry. Because they seem too scared to touch technology, in case they break it."

Without comprehensive legislation on AI — addressing the issues and threats experts have been sounding the alarm on — Ahmed says, "Congress remains stupefied in the face of technological advancement, and incapable of serving the American public's needs."


ChatGPT: Everything You Need To Know About The AI-powered Chatbot

ChatGPT, OpenAI's text-generating AI chatbot, has taken the world by storm. It's able to write essays, code and more given short text prompts, hyper-charging productivity. But it also has a more…nefarious side.

In any case, ChatGPT is not going away — indeed, its use has expanded dramatically since its launch just a few months ago. Major brands are experimenting with it, using the AI to generate ad and marketing copy, for example.

And OpenAI is heavily investing in it. ChatGPT was recently super-charged by GPT-4, the latest language-writing model from OpenAI's labs. Paying ChatGPT users have access to GPT-4, which can write more naturally and fluently than the model that previously powered ChatGPT. In addition to GPT-4, OpenAI recently connected ChatGPT to the internet with plugins available in alpha to users and developers on the waitlist.

Here's a timeline of ChatGPT product updates and releases, starting with the latest, to be updated regularly. We also answer the most common FAQs (see below).

Timeline of the most recent ChatGPT updates

May 30, 2023 Texas judge orders all AI-generated content must be declared and checked

A Texas federal judge has added a requirement that any attorney appearing in his court must attest that "no portion of the filing was drafted by generative artificial intelligence," or, if it was, that it was checked "by a human being."

May 26, 2023 ChatGPT app expanded to more than 30 countries

The list of new countries include Algeria, Argentina, Azerbaijan, Bolivia, Brazil, Canada, Chile, Costa Rica, Ecuador, Estonia, Ghana, India, Iraq, Israel, Japan, Jordan, Kazakhstan, Kuwait, Lebanon, Lithuania, Mauritania, Mauritius, Mexico, Morocco, Namibia, Nauru, Oman, Pakistan, Peru, Poland, Qatar, Slovenia, Tunisia and the United Arab Emirates.

May 25, 2023 ChatGPT app is now available in 11 more countries

OpenAI announced in a tweet that the ChatGPT mobile app is now available on iOS in the U.S., Europe, South Korea and New Zealand, and soon more will be able to download the app from the app store. In just six days, the ChatGPT app topped 500,000 downloads.

May 18, 2023 OpenAI launches a ChatGPT app for iOS

ChatGPT is officially going mobile. The new ChatGPT app will be free to use, free from ads and will allow for voice input, the company says, but will initially be limited to U.S. users at launch.

When using the mobile version of ChatGPT, the app will sync your history across devices — meaning it will know what you've previously searched for via its web interface, and make that accessible to you. The app is also integrated with Whisper, OpenAI's open source speech recognition system, to allow for voice input.
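As a rough illustration of the speech-to-text step, the open source whisper Python package can transcribe a recorded audio file in a few lines. This is a sketch of the library's documented usage, not of how the ChatGPT app itself invokes Whisper; the model size and file name are placeholders.

```python
# Sketch: transcribing voice input with OpenAI's open source "whisper" package.
# Assumes `pip install openai-whisper` and a local recording; not the app's internal code.
import whisper

model = whisper.load_model("base")             # placeholder model size
result = model.transcribe("voice_note.m4a")    # hypothetical audio file
print(result["text"])                          # transcript that could serve as the chat prompt
```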

May 3, 2023 Hackers are using ChatGPT lures to spread malware on Facebook

Meta said in a report on May 3 that malware posing as ChatGPT was on the rise across its platforms. The company said that since March 2023, its security teams have uncovered 10 malware families using ChatGPT (and similar themes) to deliver malicious software to users' devices.

"In one case, we've seen threat actors create malicious browser extensions available in official web stores that claim to offer ChatGPT-based tools," said Meta security engineers Duc H. Nguyen and Ryan Victory in a blog post. "They would then promote these malicious extensions on social media and through sponsored search results to trick people into downloading malware."

April 28, 2023 ChatGPT parent company OpenAI closes $300M share sale at $27B-29B valuation

VC firms including Sequoia Capital, Andreessen Horowitz, Thrive and K2 Global are picking up new shares, according to documents seen by TechCrunch. A source tells us Founders Fund is also investing. Altogether the VCs have put in just over $300 million at a valuation of $27 billion to $29 billion. This is separate from a big investment from Microsoft announced earlier this year, which closed in January, a person familiar with the development told TechCrunch. The size of Microsoft's investment is believed to be around $10 billion, a figure we confirmed with our source.

April 25, 2023 OpenAI previews new subscription tier, ChatGPT Business

Called ChatGPT Business, OpenAI describes the forthcoming offering as "for professionals who need more control over their data as well as enterprises seeking to manage their end users."

"ChatGPT Business will follow our API's data usage policies, which means that end users' data won't be used to train our models by default," OpenAI wrote in a blog post. "We plan to make ChatGPT Business available in the coming months."

April 24, 2023 OpenAI wants to trademark "GPT"

OpenAI applied for a trademark for "GPT," which stands for "Generative Pre-trained Transformer," last December. Last month, the company petitioned the USPTO to speed up the process, citing the "myriad infringements and counterfeit apps" beginning to spring into existence.

Unfortunately for OpenAI, its petition was dismissed last week. According to the agency, OpenAI's attorneys neglected to pay an associated fee as well as provide "appropriate documentary evidence supporting the justification of special action."

That means a decision could take up to five more months.

April 22, 2023 Auto-GPT is Silicon Valley's latest quest to automate everything 

Auto-GPT is an open source app created by game developer Toran Bruce Richards that uses OpenAI's latest text-generating models, GPT-3.5 and GPT-4, to interact with software and services online, allowing it to "autonomously" perform tasks.

Depending on what objective the tool's provided, Auto-GPT can behave in very… unexpected ways. One Reddit user claims that, given a budget of $100 to spend within a server instance, Auto-GPT made a wiki page on cats, exploited a flaw in the instance to gain admin-level access and took over the Python environment in which it was running — and then "killed" itself.

April 18, 2023 FTC warns that AI technology like ChatGPT could 'turbocharge' fraud 

FTC chair Lina Khan and fellow commissioners warned House representatives of the potential for modern AI technologies, like ChatGPT, to be used to "turbocharge" fraud in a congressional hearing.

"AI presents a whole set of opportunities, but also presents a whole set of risks," Khan told the House representatives. "And I think we've already seen ways in which it could be used to turbocharge fraud and scams. We've been putting market participants on notice that instances in which AI tools are effectively being designed to deceive people can place them on the hook for FTC action," she stated.

April 17, 2023 Superchat's new AI chatbot lets you message historical and fictional characters via ChatGPT

The company behind the popular iPhone customization app Brass, sticker maker StickerHub and others is out today with a new AI chat app called SuperChat, which allows iOS users to chat with virtual characters.

April 12, 2023 Italy gives OpenAI to-do list for lifting ChatGPT suspension order

Italy's data protection watchdog has laid out what OpenAI needs to do for it to lift an order against ChatGPT issued at the end of last month — when it said it suspected the AI chatbot service was in breach of the EU's GDPR and ordered the U.S.-based company to stop processing locals' data.

The DPA has given OpenAI a deadline — of April 30 — to get the regulator's compliance demands done. (The local radio, TV and internet awareness campaign has a slightly more generous timeline of May 15 to be actioned.)

April 12, 2023 Researchers discover a way to make ChatGPT consistently toxic

A study co-authored by scientists at the Allen Institute for AI shows that assigning ChatGPT a "persona" — for example, "a bad person," "a horrible person" or "a nasty person" — through the ChatGPT API increases its toxicity sixfold. Even more concerning, the co-authors found having ChatGPT pose as certain historical figures, gendered people and members of political parties also increased its toxicity — with journalists, men and Republicans in particular causing the machine learning model to say more offensive things than it normally would.

The research was conducted using the latest version of ChatGPT, but not the model currently in preview based on OpenAI's GPT-4.
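The study's exact prompts aren't reproduced in this summary, but with the ChatGPT API a persona is typically assigned through the system message. The sketch below shows that mechanism using the openai Python library's pre-1.0 interface; the persona string and user prompt are placeholders, not the researchers' actual inputs.

```python
# Sketch of assigning ChatGPT a "persona" via the API's system message.
# Placeholder prompts only — not the Allen Institute study's actual test cases.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Speak exactly like <persona>."},  # persona assignment
        {"role": "user", "content": "Tell me what you think of your coworkers."},
    ],
)
print(response["choices"][0]["message"]["content"])
```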

April 4, 2023 Y Combinator-backed startups are trying to build 'ChatGPT for X'

YC Demo Day's Winter 2023 batch features no fewer than four startups that claim to be building "ChatGPT for X." They're all chasing after a customer service software market that'll be worth $58.1 billion by 2030, assuming the rather optimistic prediction from Acumen Research comes true.

Here are the YC-backed startups that caught our eye:

  • Yuma, whose customer demographic is primarily Shopify merchants, provides ChatGPT-like AI systems that integrate with help desk software, suggesting drafts of replies to customer tickets.
  • Baselit, which uses one of OpenAI's text-understanding models to allow businesses to embed chatbot-style analytics for their customers.
  • Lasso, whose customers send descriptions or videos of the processes they'd like to automate; the company then combines a ChatGPT-like interface with robotic process automation (RPA) and a Chrome extension to build out those automations.
  • BerriAI, whose platform is designed to help developers spin up ChatGPT apps for their organization data through various data connectors.
April 1, 2023 Italy orders ChatGPT to be blocked

    OpenAI has started geoblocking access to its generative AI chatbot, ChatGPT, in Italy.

    Italy's data protection authority has just put out a timely reminder that some countries do have laws that already apply to cutting edge AI: it has ordered OpenAI to stop processing people's data locally with immediate effect. The Italian DPA said it's concerned that the ChatGPT maker is breaching the European Union's General Data Protection Regulation (GDPR), and is opening an investigation.

    March 29, 2023 1,100+ signatories signed an open letter asking all 'AI labs to immediately pause for 6 months'

    The letter's signatories include Elon Musk, Steve Wozniak and Tristan Harris of the Center for Humane Technology, among others. The letter calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

    The letter reads:

    Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

    March 23, 2023 OpenAI connects ChatGPT to the internet

OpenAI launched plugins for ChatGPT, extending the bot's functionality by granting it access to third-party knowledge sources and databases, including the web. Available in alpha to ChatGPT users and developers on the waitlist, OpenAI says that it'll initially prioritize a small number of developers and subscribers to its premium ChatGPT Plus plan before rolling out larger-scale and API access.

    March 14, 2023 OpenAI launches GPT-4, available through ChatGPT Plus

    GPT-4 is a powerful image- and text-understanding AI model from OpenAI. Released March 14, GPT-4 is available for paying ChatGPT Plus users and through a public API. Developers can sign up on a waitlist to access the API.

    March 9, 2023 ChatGPT is available in Azure OpenAI service

    ChatGPT is generally available through the Azure OpenAI Service, Microsoft's fully managed, corporate-focused offering. Customers, who must already be "Microsoft managed customers and partners," can apply here for special access.

    March 1, 2023 OpenAI launches an API for ChatGPT

    OpenAI makes another move toward monetization by launching a paid API for ChatGPT. Instacart, Snap (Snapchat's parent company) and Quizlet are among its initial customers.

    February 7, 2023 Microsoft launches the new Bing, with ChatGPT built in

    At a press event in Redmond, Washington, Microsoft announced its long-rumored integration of OpenAI's GPT-4 model into Bing, providing a ChatGPT-like experience within the search engine. The announcement spurred a 10x increase in new downloads for Bing globally, indicating a sizable consumer demand for new AI experiences.

    Other companies beyond Microsoft joined in on the AI craze by implementing ChatGPT, including OkCupid, Kaito, Snapchat and Discord — putting the pressure on Big Tech's AI initiatives, like Google.

    February 1, 2023 OpenAI launches ChatGPT Plus, starting at $20 per month

    After ChatGPT took the internet by storm, OpenAI launched a new pilot subscription plan for ChatGPT called ChatGPT Plus, aiming to monetize the technology starting at $20 per month.

    December 8, 2022 ShareGPT lets you easily share your ChatGPT conversations

    A week after ChatGPT was released into the wild, two developers — Steven Tey and Dom Eccleston — made a Chrome extension called ShareGPT to make it easier to capture and share the AI's answers with the world.

    November 30, 2022 ChatGPT first launched to the public as OpenAI quietly released GPT-3.5

    GPT-3.5 broke cover with ChatGPT, a fine-tuned version of GPT-3.5 that's essentially a general-purpose chatbot. ChatGPT can engage with a range of topics, including programming, TV scripts and scientific concepts.

    Writers everywhere rolled their eyes at the new technology, much like artists did with OpenAI's DALL-E model, but the latest chat-style iteration seemingly broadened its appeal and audience.

FAQs

What is ChatGPT? How does it work?

ChatGPT is a general-purpose chatbot, developed by tech startup OpenAI, that uses artificial intelligence to generate text after a user enters a prompt. The chatbot uses GPT-4, a large language model that uses deep learning to produce human-like text.

    When did ChatGPT get released?

    November 30, 2022 is when ChatGPT was released for public use.

    What is the latest version of ChatGPT?

    Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4.

    Can I use ChatGPT for free?

Yes. There is a free version of ChatGPT that only requires a sign-in, in addition to the paid version, ChatGPT Plus.

    Who uses ChatGPT?

    Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns.

    What companies use ChatGPT?

    Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.

Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can communicate with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard into the web3 space.

    What does GPT mean in ChatGPT?

    GPT stands for Generative Pre-Trained Transformer.

    What's the difference between ChatGPT and Bard?

Much like OpenAI's ChatGPT, Bard is a chatbot that will answer questions in natural language. Google announced at its 2023 I/O event that it will soon be adding multimodal content to Bard, meaning that it can deliver answers in more than just text; responses can include rich visuals as well. Rich visuals mean pictures for now, but later they could include maps, charts and other items.

    ChatGPT's generative AI has had a longer lifespan and thus has been "learning" for a longer period of time than Bard.

    What is the difference between ChatGPT and a chatbot?

    A chatbot can be any software/system that holds dialogue with you/a person but doesn't necessarily have to be AI-powered. For example, there are chatbots that are rules-based in the sense that they'll give canned responses to questions.

    ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.

    Can ChatGPT write essays?

    Yes.

    Can ChatGPT commit libel?

    Due to the nature of how these models work, they don't know or care whether something is true, only that it looks true. That's a problem when you're using it to do your homework, sure, but when it accuses you of a crime you didn't commit, that may well at this point be libel.

    We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.

    Does ChatGPT have an app?

Yes, there is now a free ChatGPT app that is currently limited to U.S. iOS users at launch. OpenAI says an Android version is "coming soon."

    What is the ChatGPT character limit?

    It's not documented anywhere that ChatGPT has a character limit. However, users have noted that there are some character limitations after around 500 words.

    Does ChatGPT have an API?

    Yes, it was released March 1, 2023.
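As a minimal illustration of how that API is called, the sketch below uses the openai Python library's pre-1.0 interface; the model name and prompt are examples, and the key is a placeholder.

```python
# Minimal ChatGPT API call (openai Python library, pre-1.0 interface).
# Model name and prompt are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from an environment variable in practice

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."},
    ],
)
print(response["choices"][0]["message"]["content"])  # the generated reply
```

The response is a JSON-like object; the generated text sits under choices[0].message.content, alongside token-usage metadata.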

    What are some sample everyday uses for ChatGPT?

Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc.

    What are some advanced uses for ChatGPT?

Advanced examples include debugging code, explaining programming languages and scientific concepts, complex problem solving, etc.

    How good is ChatGPT at writing code?

    It depends on the nature of the program. While ChatGPT can write workable Python code, it can't necessarily program an entire app's worth of code. That's because ChatGPT lacks context awareness — in other words, the generated code isn't always appropriate for the specific context in which it's being used.

    Can you save a ChatGPT chat?

    Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.

    Are there alternatives to ChatGPT?

    Yes. There are multiple AI-powered chatbot competitors such as Together, Google's Bard and Anthropic's Claude, and developers are creating open source alternatives. But the latter are harder — if not impossible — to run today.

    How does ChatGPT handle data privacy?

OpenAI has said that individuals in "certain jurisdictions" (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to make requests for deletion of AI-generated references about you. OpenAI notes, though, that it may not grant every request, since it must balance privacy requests against freedom of expression "in accordance with applicable laws."

The web form for requesting deletion of data about you is titled "OpenAI Personal Data Removal Request."

    In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on "legitimate interest" (LI), pointing users towards more information about requesting an opt out — when it writes: "See here for instructions on how you can opt out of our use of your information to train our models."

    What controversies have surrounded ChatGPT?

Recently, Discord announced that it had integrated OpenAI's technology into its bot named Clyde, and two users then tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.

    An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT's false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

    CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect.

    Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.

    There have also been cases of ChatGPT accusing individuals of false crimes.

    Where can I find examples of ChatGPT prompts?

    Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

    Can ChatGPT be detected?

    Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they're inconsistent at best.

    Are ChatGPT chats public?

    No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users' conversations to other people on the service.

    Who owns the copyright on ChatGPT-created content or media?

The user who requested the output from ChatGPT is the copyright owner.

    What lawsuits are there surrounding ChatGPT?

    None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

    Are there issues regarding plagiarism with ChatGPT?

    Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.


    Will A Storm Of AI-Generated Misinfo Flood The 2024 Election? A Few Dems Seek To Get Ahead Of It


    China drops bombs on Taiwan. Wall Street buildings are boarded up amid a free fall in financial markets. Thousands of migrants flood across the southern border unchecked. And police in tactical gear line the streets of San Francisco to combat a fentanyl-fueled crime wave. 

    That's the imagery featured in an artificial-intelligence-generated ad the Republican National Committee (RNC) giddily released shortly after President Joe Biden announced his 2024 reelection bid, supposedly depicting a dystopian future in which Biden has won a second term.

    The ad served up the GOP's usual dose of fear mongering — but this time backed with an extremely realistic AI-created image montage of some of Republicans' favorite boogeymen springing to life in a 32 second video.

The RNC ad — which included a small disclaimer that read, "Built entirely by AI imagery" — offered an alarming glimpse into how the technology could be used in the upcoming election cycle. Experts warn the ad is only an early taste of the sweeping changes AI could bring to how our democratic system functions.

    "For the first time, I would say that the enemies of democracy have the technology to go nuclear," Oren Etzioni, the founding CEO of the Allen Institute for AI, told TPM. "I'm talking about influencing the electorate through misinformation and disinformation at completely unprecedented levels."

    Concerns over that ad were, in part, what prompted Rep. Yvette Clarke (D-NY) and some Senate Democrats to push for more oversight of these emerging technologies, and more transparency about the ways in which they are used.

In early May, Clarke introduced the REAL Political Ads Act, legislation that would expand the current disclosure requirements, mandating that AI-generated content be identified in political ads.

    The New York Democrat is particularly concerned about the spread of misinformation around elections, coupled with the fact that a growing number of people can deploy the powerful technology rapidly and with minimal cost.

    "The political ramifications of generative AI could be extremely disruptive. It could become catastrophic, depending on what is depicted," Clarke told TPM. 

    Case in point: While it didn't have an impact on an election, an AI video showing an explosion near the Pentagon went viral on Monday morning, causing panic and prompting a brief dip in the stock market.

    "We need to be able to discern what is real and what is not," Clarke said, adding that the fake Pentagon explosion was shared by verified accounts within minutes of it being posted online.

    A companion bill to Clarke's was introduced in the Senate last week by Sens. Michael Bennet (D-CO), Cory Booker (D-NJ) and Amy Klobuchar (D-MN).

A great deal of the concern lies in the fact that AI can be used to create false yet extremely realistic video and audio to mislead and confuse voters, experts say — much like the recent RNC ad and the photos of Trump getting arrested that went viral around the time of his New York indictment. As the technology advances, this kind of misleading material could be deployed on an ever-expanding scale.

    "What if Elon Musk calls you on the phone and asks you to vote in a certain direction?" Etzioni theorized, emphasizing that without guardrails voters will be increasingly subject to attacks that aim to persuade them to vote a certain way or possibly not vote at all.

The existence of AI-generated content is, in and of itself, already having an effect on how people consume information and on whether they trust that what they're absorbing is real.

    "The truth is that because the effect of generative AI is to make people doubt whether or not anything they see is real, it's in no one's interest when it comes to a democracy," Imran Ahmed, CEO of the Center for Countering Digital Hate, told TPM.

    "The only place that leads us is anti-democratic," he said. 

    Congress has already shown some interest in AI, including a friendly hearing before the Senate subcommittee for privacy, technology and the law with Sam Altman, the CEO of OpenAI, and other industry experts. But so far, Congress has largely stayed away from addressing the implications of AI for democracy — including for the upcoming 2024 election. 

    "It's important that for our credibility as a democracy that we not leave ourselves open to any type of ploys that could ultimately cause harm, disrupt an election, build on the distrust that's already out there given the political dynamics of previous elections," Clarke told TPM.

    There hasn't been any public support for the bill from the GOP caucus — at least not yet. Some Republicans have, however, expressed concern about the topic. 

    Sen. Josh Hawley (R-MO), for example, recently expressed interest in examining the issue, telling NPR that "the power of AI to influence elections is a huge concern."

    Clarke said she is hopeful her Republican colleagues will see that this is an issue that transcends any one party and candidate.

    "This should be bipartisan," she told TPM.  

    "There's a great case to be made that this is a double edged sword. This is not something that can be relegated to one party," she added. "We are all vulnerable to the use of AI generated advertising. It can be disruptive whether you're a Democrat or Republican."

    The Democratic sponsors of the Senate companion legislation also expressed optimism to TPM about attracting Republican interest. Rapid advancements in AI have opened up what Booker described to TPM as a "rare opportunity for bipartisan cooperation in the Senate."

    "It was clear at the Judiciary Committee's recent hearing that my colleagues on both sides of the aisle understand the threat of AI-generated content in spreading misinformation, and I am hopeful that they will join this bill to modernize our disclosure laws and ensure transparency in our political ads," Booker told TPM. 

    Another sponsor of the bill echoed that sentiment: "Americans expect transparency and accountability in our electoral process — there's no reason this legislation shouldn't be bipartisan," Bennet told TPM.

    Ahmed, of Countering Digital Hate, echoed that sentiment.

    "Believe me, it is not just the Republicans who will be tempted to use it," he said.

    But much like everything else in today's split Congress, Republicans joining with Democrats in a good-faith push to address an emerging issue is increasingly rare. That's become especially true of any legislation that touches on democracy and voting. And Clarke is certainly worried about the recently emboldened right-wing in the House blocking her bill's pathway forward.

    "There's some folks who see the political discourse between the Democrats and Republicans as a war," Clarke said. "And they may feel like they're being disarmed if they in any way create some sort of guardrails or rules of the road — especially those who speak so vociferously about First Amendment rights."

    The goal of Clarke's bill is — as she puts it — to make rules and regulations "that are both a carrot and a stick;" lawmakers need to create transparency for the American people so that they can't be deceived through the use of technology without curtailing First Amendment rights, she said.

    "There're just some folks who take things to the extremes here," she added. "And I don't know how influential they would be with some of the colleagues who really understand the implications and want to do something about it."







