How Big Tech is fueling — and monetizing — false narratives about Israel and Palestine

Ellery Roberts Biddle

THE FOG OF DIGITAL DISINFORMATION

I have few words for the atrocities carried out by Hamas in Israel since October 7, and the horrors that are now unfolding in Gaza.

I have a few more for a certain class of social media users at this moment. The violence in Israel and Palestine has triggered what feels like a never-ending stream of pseudo-reporting on the conflict: allegations, rumors and straight-up falsehoods about what is happening are emerging at breakneck speed. I’m not talking about posts from people who are actually on the ground and may be saying or reporting things that are not verified. That’s the real fog of war. Instead, I’m talking about posts from people who jump into the fray not because they have something urgent to report or say, but just because they can.

Social media has given many of us the illusion of total access to a conflict situation, a play-by-play in real time. In the past, this was enlightening — or at least it felt that way. During the Gaza War in 2014, firsthand civilian accounts were something you could readily find on what was then called Twitter, if you knew where to look. I remember reading one journalist’s tweets about her desperate attempt to flee Gaza at the Rafah border crossing, amid heavy shelling by Israeli forces — her story stuck with me for years, returning to my mind whenever Gaza came up. These kinds of narratives may still be out there, but they are almost impossible to find amidst the clutter. And this time around, those stories from Gaza could disappear from the web altogether, now that Israel has cut off electricity in the territory, and internet access there is in free fall.

This illusion of being close to a conflict, of being able to understand its contours from far away, is no longer a product of carefully reported news and firsthand accounts on social media. Sure, there was garbage out there in 2014, but nearly a decade on, it feels as if there are just as many posts about war crimes that never happened as there are about actual atrocities that did. Our current internet, not to mention the state of artificial intelligence, makes it too easy to spread misinformation and lies.

On October 9, tens of thousands of people shared reports that Israeli warplanes had bombed a historic church in Gaza, complete with photos that could convince anyone who hasn’t actually been to that site. The church itself posted on Facebook to discredit the reports and assure people that it remains untouched. Conflict footage from Syria, Afghanistan, and as far away as Guatemala has been “recycled” and presented as contemporary proof of brutalities committed by one side or the other. And of course there are the “videos” of airstrikes that turned out to be screengrabs from the video game “Arma 3.” Earnest fact-checking outfits and individual debunkers have rushed in to correct and inform, but it’s not clear how much difference this makes. People look to have their biases confirmed, and then scurry on through the digital chaos.

Some are even posting about the war for money. Speaking with Brooke Gladstone of “On The Media” on October 12, tech journalist Avi Asher-Shapiro pointed out that even as X has dismissed most of the staff who handled violent and false content on the platform, it has created new incentives for this kind of behavior by enabling “creators” to profit from the material they post. So regardless of whether a post is true, the more likes, clicks and shares it gets, the more money its creator rakes in. TikTok offers similar incentives.

While X appears to be the unofficial epicenter of this maelstrom, the disinformation deluge is happening on Meta’s platforms and TikTok too. All three companies are now on the hook for it in the European Union. EU Commissioner Thierry Breton issued a series of public letters to their CEOs, pointing out that under the bloc’s Digital Services Act, they have to answer to regulatory authorities when they fail to stop the spread of content that could lead to actual harm.

The sheer volume of disinformation is hard to ignore. And it is an unconscionable distraction from the grave realities and horror of the war in Gaza.

In pursuit of mass scale, the world’s biggest social media companies designed their platforms to host limitless amounts of content. This is nearly impossible for them to oversee or manage, as the events in Israel and Palestine demonstrate. Yet from Myanmar and Sudan to Ukraine and the U.S., it has been proven again and again that violent material on social media can trigger acts of violence in real life, and that people are worse off when the algorithms get the run of the place. The companies have never fully gotten ahead of this issue. Instead, they have cobbled together a combination of technology and people to do the work of identifying the worst posts and scrubbing them from the web. 

The people — content moderators — typically review hundreds of posts each day, from videos of racist diatribes to beheadings and sexual abuse. They see the worst of the worst. If they didn’t, the platforms would be replete with this kind of material, and no one would want to use them. That is not a viable business model.

Despite the core need for robust content moderation, Big Tech companies outsource most of it to third-party firms operating in countries where labor is cheap, like India or the Philippines. Or Kenya, where workers report being paid between $1 and $4 per hour and having limited access to counseling, a serious problem in a job like this.

This week, Coda Story reporter Erica Hellerstein brought us a deep dive on the lives of content moderation workers in Nairobi who over the past several months have come together to push back on what they say are exploitative labor practices. More than 180 content moderators are suing Meta for $1.6 billion over poor working conditions, low pay and what they allege was unfair dismissal after Meta switched contracting companies. Workers have also voted to form a new trade union that they hope will force big companies like Meta, and outsourcing firms like Sama, to change their ways. Erica writes:

“While it happens at a desk, mostly on a screen, the demands and conditions of this work are brutal. Current and former moderators I met in Nairobi in July told me this work has left them with post-traumatic stress disorder, depression, insomnia and thoughts of suicide.

“These workers are reaching a breaking point. And now, Kenya has become ground zero in a battle over the future of content moderation in Africa and beyond. On one side are some of the most powerful and profitable tech companies on earth. On the other are young African content moderators who are stepping out from behind their screens and demanding that Big Tech companies reckon with the human toll of their enterprise.”

Odanga Madung, a Kenya-based journalist and a fellow at the Mozilla Foundation, believes the flurry of litigation and organizing represents a turning point in the country’s tech labor trajectory. In his words: “This is the tech industry’s sweatshop moment.” Don’t miss this terrific, if sobering, read.

Images of violence are also at issue in Manipur, India, where a new government order has effectively banned people from posting videos and photos depicting acts of violence. This is serious because Manipur has been immersed in waves of public unrest and outbursts of ethnic violence since May. After photos of the slain bodies of two students who had gone missing in July surfaced and went viral on social media last month, authorities shut down the internet in an effort to stem unrest. In the words of the state government, the new order is intended as a “positive step towards bringing normalcy in the State.” But not everyone is buying this. On X yesterday, legal scholar Apar Gupta called the order an attempt to “contour” media narratives that would also “silence the voices of the residents of the state even beyond the internet shutdown.”

The U.N. is helping Saudi Arabia to “tech-wash” itself. This week, officials announced that the kingdom will host the world’s biggest internet policy conference, the Internet Governance Forum (IGF), in 2024. This U.N.-sponsored gathering of governments, corporations and tech-focused NGOs might sound dull; I’ve been to a handful of them and can confirm that some of it is indeed a yawn. But some of it really matters. The IGF is a place where influential policymakers hash out ideas for how the global internet ought to work and how it can be a positive force in an open society, or how it can do the opposite. Apart from China and Iran, I can think of few worse places to hold it than Saudi Arabia, a country that uses technology to exercise authoritarian control in more ways than we probably know.
