The Israel-Hamas war reveals how social media sells you the illusion of reality


New York (CNN) —

As the Israel-Hamas war reaches the end of its first week, millions have turned to platforms including TikTok and Instagram in hopes of comprehending the brutal conflict in real time. Trending search terms on TikTok in recent days illustrate the hunger for frontline perspectives: From “graphic Israel footage” to “live stream in Israel right now,” internet users are seeking out raw, unfiltered accounts of a crisis they are desperate to understand.

For the most part, they are succeeding, discovering videos of tearful Israeli children wrestling with the permanence of death alongside images of dazed Gazans sitting in the rubble of their former homes. But that same demand for an intimate view of the war has created ample openings for disinformation peddlers, conspiracy theorists and propaganda artists — malign influences that regulators and researchers now warn pose a dangerous threat to public debates about the war.

One recent TikTok video, seen by more than 300,000 users and reviewed by CNN, promoted conspiracy theories about the origins of the Hamas attacks, including false claims that they were orchestrated by the media. Another, viewed more than 100,000 times, shows a clip from the video game “Arma 3” with the caption, “The war of Israel.” (Some users in the comments of that video noted they had seen the footage circulating before — when Russia invaded Ukraine.)

TikTok is hardly alone. One post on X, formerly Twitter, was viewed more than 20,000 times and flagged as misleading by London-based social media watchdog Reset for purporting to show Israelis staging civilian deaths for cameras. Another X post the group flagged, viewed 55,000 times, was an antisemitic meme featuring Pepe the Frog, a cartoon that has been appropriated by far-right white supremacists. On Instagram, a widely shared video of parachutists descending on a crowd, captioned “imagine attending a music festival when Hamas parachutes in,” was debunked over the weekend; it in fact showed unrelated parachute jumpers in Egypt. (Instagram later labeled the video as false.)

This week, European Union officials sent warnings to TikTok, Facebook and Instagram-parent Meta, YouTube and X, highlighting reports of misleading or illegal content about the war on their platforms and reminding the social media companies they could face billions of dollars in fines if an investigation later determines they violated EU content moderation laws. US and UK lawmakers have also called on those platforms to ensure they are enforcing their rules against hateful and illegal content.

Imran Ahmed, founder and CEO of the social media watchdog group Center for Countering Digital Hate, told CNN that since the violence in Israel began, his group has tracked a spike in efforts to pollute the information ecosystem surrounding the conflict.

“Getting information from social media is likely to lead to you being severely disinformed,” said Ahmed.

Everyone from US foreign adversaries to domestic extremists to internet trolls and “engagement farmers” has been exploiting the war on social media for their own personal or political gain, he added.

“Bad actors surrounding us have been manipulating, confusing and trying to create deception on social media platforms,” Dan Brahmy, CEO of the Israeli social media threat intelligence firm Cyabra, said Thursday in a video posted to LinkedIn. “If you are not sure of the trustworthiness [of content] … do not share,” he said.

‘Upticks in Islamophobic and antisemitic narratives’

Graham Brookie, senior director of the Digital Forensic Research Lab at the Atlantic Council in Washington, DC, told CNN his team has witnessed a similar phenomenon. The trend includes a wave of first-party terrorist propaganda, content depicting graphic violence, misleading and outright false claims, and hate speech – particularly “upticks in specific and general Islamophobic and antisemitic narratives.”

Much of the most extreme content, he said, has been circulating on Telegram, the messaging app with few content moderation controls and a format that facilitates quick and efficient distribution of propaganda or graphic material to a large, dedicated audience. But in much the same way that TikTok videos are frequently copied and rebroadcast on other platforms, content shared on Telegram and other more fringe sites can easily find a pipeline onto mainstream social media or draw in curious users from major sites. (Telegram didn’t respond to a request for comment.)

Schools in Israel, the United Kingdom and the United States this week urged parents to delete their children’s social media apps over concerns that Hamas will broadcast or disseminate disturbing videos of hostages who have been seized in recent days. Photos of dead or bloodied bodies, including those of children, have already spread across Facebook, Instagram, TikTok and X this week.

And tech watchdog group Campaign for Accountability on Thursday released a report identifying several accounts on X sharing apparent propaganda videos with Hamas iconography or linking to official Hamas websites. Earlier in the week, X faced criticism for videos unrelated to the war being presented as on-the-ground footage and for a post from owner Elon Musk directing users to follow accounts that previously shared misinformation. (Musk’s post was later deleted, and the videos were labeled using X’s “community notes” feature.)

Some platforms are in a better position to combat these threats than others. Widespread layoffs across the tech industry, including at some social media companies’ ethics and safety teams, risk leaving the platforms less prepared at a critical moment, misinformation experts say. Much of the content related to the war is also spreading in Arabic and Hebrew, testing the platforms’ capacity to moderate non-English content, where enforcement has historically been less robust than it is for English.


“Of course, platforms have improved over the years. Communication & info sharing mechanisms exist that did not in years past. But they have also never been tested like this,” Brian Fishman, the co-founder of trust and safety platform Cinder who formerly led Facebook’s counterterrorism efforts, said Wednesday in a post on Threads. “Platforms that kept strong teams in place will be pushed to the limit; platforms that did not will be pushed past it.”

Linda Yaccarino, the CEO of X, said in a letter Wednesday to the European Commission that the platform has “identified and removed hundreds of Hamas-related accounts” and is working with several third-party groups to prevent terrorist content from spreading. “We’ve diligently taken proactive actions to remove content that violates our policies, including: violent speech, manipulated media and graphic media,” she said. The European Commission on Thursday formally opened an investigation into X following its earlier warning about disinformation and illegal content linked to the war.

Meta spokesperson Andy Stone said that since Hamas’ initial attacks, the company has established “a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation. Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact checkers in the region to limit the spread of misinformation. We’ll continue this work as this conflict unfolds.”

YouTube, for its part, says its teams have removed thousands of videos since the attacks began and that it continues to monitor for hate speech, extremism, graphic imagery and other content that violates its policies. In searches related to the war, the platform is also surfacing almost exclusively videos from mainstream news organizations.

Snapchat told CNN that its misinformation team is closely watching content coming out of the region to make sure it complies with the platform’s community guidelines, which prohibit misinformation, hate speech, terrorism, graphic violence and extremism.

TikTok did not respond to a request for comment on this story.

Large tech platforms are now subject to content-related regulation under a new EU law called the Digital Services Act, which requires them to prevent the spread of mis- and disinformation, address rabbit holes of algorithmically recommended content and avoid possible harms to user mental health. But in such a contentious moment, platforms that take too heavy a hand in moderation could risk backlash and accusations of bias from users.

Platforms’ algorithms and business models — which generally rely on promoting the content most likely to garner significant engagement — can aid bad actors who design content to capitalize on that structure, Ahmed said. Other product choices, such as X’s decisions to let any user pay for a subscription that grants a blue “verification” checkmark and an algorithmic boost to post visibility, and to remove headlines from links to news articles, can further distort how users perceive a news event.

“It’s time to break the emergency glass,” Ahmed said, calling on platforms to “switch off the engagement-driven algorithms.” He added: “Disinformation factories are going to cause geopolitical instability and put Jews and Muslims at harm in the coming weeks.”

Even as social media companies work to hide the absolute worst content from their users — whether out of a commitment to regulation, advertisers’ brand safety concerns, or their own editorial judgments — users’ continued appetite for gritty, close-up dispatches from Israelis and Palestinians on the ground is forcing platforms to walk a fine line.

“Platforms are caught in this demand dynamic where users want the latest and the most granular, or the most ‘real’ content or information about events, including terrorist attacks,” Brookie said.

The dynamic simultaneously highlights the business models of social media and the role the companies play in carefully calibrating their users’ experiences. The very algorithms that are widely criticized elsewhere for serving up the most outrageous, polarizing and inflammatory content are now the same ones that, in this situation, appear to be giving users exactly what they want.

But closeness to a situation is not the same thing as authenticity or objectivity, Ahmed and Brookie said, and the wave of misinformation flooding social media right now underscores the dangers of conflating them.

Despite giving the impression of reality and truthfulness, Brookie said, individual stories and combat footage conveyed through social media often lack the broader perspective and context that journalists, research organizations and even social media moderation teams apply to a situation to help achieve a fuller understanding of it.

“It’s my opinion that users can interact with the world as it is — and understand the latest, most accurate information from any given event — without having to wade through, on an individual basis, all of the worst possible content about that event,” Brookie said.

Potentially exacerbating the messy information ecosystem is a culture on social media platforms that often encourages users to bear witness to and share information about the crisis as a way of signaling their personal stance, whether or not they are deeply informed. That can lead even well-intentioned users to unwittingly share misleading information or emotionally charged content created to rack up views and engagement-driven revenue.

“Be very cautious about sharing in the middle of a major world event,” Ahmed said. “There are people trying to get you to share bullsh*t, lies, which are designed to inculcate you to hate or to misinform you. And so sharing stuff that you’re not sure about is not helping people, it’s actually really harming them and it contributes to an overall sense that no one can trust what they’re seeing.”
