Bennet Pushes Tech Companies to Crack Down on Disinformation Amid Israel-Hamas Conflict

Washington, D.C. — Colorado U.S. Senator Michael Bennet wrote to the leaders of X, Meta, TikTok, and Alphabet to urge them to stop the spread of false and misleading content related to the ongoing conflict between Israel and Hamas.

“According to numerous reports, deceptive content has ricocheted across social media sites since the conflict began, sometimes receiving millions of views,” wrote Bennet. “In many cases, your platforms’ algorithms have amplified this content, contributing to a dangerous cycle of outrage, engagement, and redistribution.”

In his letter, Bennet identifies examples of alarming content that social media platforms have allowed to spread, including claims that Ukraine supplied weapons to Hamas and that Hamas had captured a top Israeli general. Over the past year, many of these companies laid off hundreds of staff focused on content moderation or combating disinformation.

“These decisions contribute to a cascade of violence, paranoia, and distrust around the world. Your platforms are helping produce an information ecosystem in which basic facts are increasingly in dispute, while untrustworthy sources are repeatedly designated as authoritative,” continued Bennet.

Bennet calls on these companies to provide information about the types of content that have been removed, the policies in place to reduce false and misleading content related to the conflict between Israel and Hamas, and efforts to limit the spread of posts glorifying hate speech or terrorism.

Bennet has pushed for stronger standards to stop the spread of deceptive content online. He was the first senator to propose creating an expert federal body to regulate digital platforms with his Digital Platform Commission Act. In June, Bennet called on major technology companies to identify and label AI-generated content. In March, he urged the CEOs of leading technology companies to protect younger users as they deploy AI chatbots.

The text of the letter is available HERE and below.

Dear Mr. Musk, Mr. Zuckerberg, Mr. Chew, and Mr. Pichai:

I write with concerns over false and misleading content on your platforms related to the ongoing conflict between Israel and the terrorist organization Hamas. 

According to numerous reports, deceptive content has ricocheted across social media sites since the conflict began, sometimes receiving millions of views. These lies include claims that Ukraine had supplied weapons to Hamas; that Hamas had captured a top Israeli general; that Russian president Vladimir Putin had warned the U.S. against intervention; and that crisis actors were masquerading as victims. Video game footage has been passed off as genuine, and old videos from other conflicts have been reposted purporting to depict current events.

In many cases, your platforms’ algorithms have amplified this content, contributing to a dangerous cycle of outrage, engagement, and redistribution. Your platforms have made particular design decisions that hamper your ability to identify and remove illegal and dangerous content. Platforms have decided to amplify paying users’ posts, optimize for engagement rather than veracity, and offload fact-checking and content moderation responsibilities to third parties. In November 2022, X, then Twitter, laid off 15 percent of its trust and safety department and dissolved the company’s Trust and Safety Council. In January 2023, Meta terminated around 175 content moderators’ contracts and reportedly eliminated 100 positions focused on trust, integrity, and responsibility. In February 2023, Google reduced its team developing tools to counter online hate speech and disinformation by one-third. Just last month, X slashed about half of the platform’s global team dedicated to reducing disinformation and election fraud, including that team’s leader.

These decisions contribute to a cascade of violence, paranoia, and distrust around the world. Your platforms are helping produce an information ecosystem in which basic facts are increasingly in dispute, while untrustworthy sources are repeatedly designated as authoritative. According to a Pew survey from last year, U.S. adults under the age of 30 trust the information they see on social media platforms about as much as the information they receive from reputable news organizations. And the emergence of sophisticated generative artificial intelligence models threatens to further undermine claims to accuracy and credibility.

Last week, the European Union sent enforcement letters to your companies and requested information on the actions your platforms have taken to remove illegal content and disinformation, respond to user and law enforcement complaints, and adopt effective risk assessment and mitigation measures. Although certain platforms have initiated some moderation, the mountain of false content clearly demonstrates that your current policies and protocols are inadequate.

I urge you to take immediate and concerted efforts to address the scale and speed at which disinformation is circulating, and request responses to the following questions by October 31, 2023:

  • How many pieces of content have you removed related to the current conflict between Israel and Hamas? Please indicate the categories of content removed. 
  • How many pieces of content have you removed related to the current conflict between Israel and Hamas that were identified by your internal content moderation systems? 
    • How many were identified and flagged by third parties, including users, partner organizations, or other sources?

  • What specific new policies, procedures, or resources have you developed in response to the current conflict between Israel and Hamas? 
  • How many employees do you currently have dedicated to content moderation? 
    • Of these, how many are contractors?
    • Of these, how many are dedicated to moderating non-English language content?

  • What machine-learning or other automated moderation systems do you have in place to detect and flag potentially violating content? 
  • What policies do you have in place regarding terrorist content, violent content, content glorifying terrorist organizations, hate speech, or demonstrably false or misleading content? When were these policies last reviewed? 

Thank you for your attention to this important matter. 

Sincerely,
