For OpenAI’s CEO, the rules don’t apply https://www.codastory.com/newsletters/openai-ethics-board-altman/ Thu, 30 Nov 2023 13:18:11 +0000 https://www.codastory.com/?p=48563 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Palestinians face detention over “incitement” on social media, and Netanyahu welcomes Elon Musk despite his antisemitic posts on X.

The post For OpenAI’s CEO, the rules don’t apply appeared first on Coda Story.

Since my last newsletter, a shakeup at OpenAI somehow caused Sam Altman to be fired, hired by Microsoft, and then re-hired to his original post in less than a week’s time. Meet the new boss, literally the same as the old boss.

There are still a lot of unknowns about what went down behind closed doors, but the consensus is that OpenAI’s original board fired Altman because they thought he was building risky, potentially harmful tech in the pursuit of major profits. I’ve seen other media calling it a “failed coup,” which is the wrong way to understand what happened. Under the unique setup at OpenAI — which pledges to “build artificial general intelligence (AGI) that is safe and benefits all of humanity” — it is the board’s job to hold the CEO accountable not to investors or even to its employees, but rather to “all of humanity.” The board (alongside some current and former staff) felt Altman wasn’t holding up his end of the deal, so they did their job and showed him the door.

This was no coup. But it did ultimately fail. Even though Altman was part of the team that created this accountability structure, its rules apparently no longer applied to him. As soon as he was out, his staff threatened to quit en masse. Powerful people intervened and the old boss was back at the helm in time for Thanksgiving dinner.

Now, OpenAI’s board is more pale, male and I dare say stale than it was two weeks ago. And Altman’s major detractors — Helen Toner, an AI safety researcher and strategy lead at Georgetown University’s Center for Security and Emerging Technology, and Tasha McCauley, a scientist at the RAND Corporation — have been shown the door. Both brought expertise that lent legitimacy to the company’s claims of prioritizing ethics and benefiting “all of humanity.” You know, women’s work. 

As esteemed AI researcher Margaret Mitchell wrote on X, “When men speak up abt AI&society, they gain tech opportunities. When non-men speak up, they **lose** them.” A leading scholar on bias and fairness in AI, Mitchell herself was famously fired by Google on the heels of Timnit Gebru, whose dismissal from Google was sparked by her critiques of the company’s approach to building AI. They are just a few of many women across the broader technology industry who have been fired or ushered out of powerful positions when they raised serious concerns about how technology might affect people’s lives.

I don’t know exactly what happened to the women who were once on OpenAI’s board, but I do know that when you have to do a ton of extra work simply to speak up, only to be shut down or shown the door, that’s a raw deal. 

On that note, who’s on Altman’s board now? Arguably, the biggest name is former U.S. Treasury Secretary Larry Summers, who used to be the president of Harvard University, but resigned amid fallout from a talk he gave in which he “explained” that women were underrepresented in the sciences because, on average, we just didn’t have the aptitude for the subject matter. Pick your favorite expletive and insert it here! Even though Summers did technically step down as president, the university still sent him off with an extra year’s salary. He has since continued to teach at Harvard, made millions working for hedge funds and become a special adviser at kingmaker venture capital firm Andreessen Horowitz. And now he gets to help decide the trajectory of what might be the most consequential AI firm in the world. That is a sweet deal.

The other new addition to the board is former Salesforce Co-CEO Bret Taylor, who was on the board of Twitter when it was still Twitter. There, Taylor played a major role in forcing Elon Musk to go through with his acquisition of the company, though Musk had tried to back out early in the process. This was good for Twitter’s investors and super terrible for everyone else, ranging from Twitter’s employees to the general public who had come to rely on the service as a place for news, critical debate and coordination in public emergencies. 

In Twitter’s case, there was no illusion about benefiting “all of humanity” — the board was told to act on investors’ behalf, and that’s what it did. It shows just how risky it is for us to depend on tech platforms run by profit-driven companies to serve as a quasi-public space. I worry that OpenAI will be next in line. And I don’t see this board doing anything to stop it.

GLOBAL NEWS

Thousands of Palestinians in the Israeli-occupied West Bank have been arrested since Oct. 7, some over things they’ve posted — or appear to have posted — online. One notable figure among them is Ahed Tamimi, a 22-year-old who has been a prominent advocate against the occupation since she was a teenager. Israeli authorities raided Tamimi’s home in early November and arrested her on accusations that she had written a post on Instagram inciting violence against Israeli settlers. The young woman’s family denied that Tamimi had posted the message, explaining that the post came from someone impersonating her, amid an online harassment campaign targeting the activist. Since her arrest, she has not been charged with any crime. On Tuesday, Tamimi’s name appeared on an official list of Palestinian detainees slated for release.

Israeli authorities have been quick to retaliate against anything that might look like antisemitic speech online — unless it comes from Elon Musk. The automotive and space-tech tycoon somehow managed to get a personal tour of Kfar Aza kibbutz — the scene of one of the massacres that Hamas militants committed on Oct. 7 — from no less than Prime Minister Benjamin Netanyahu himself this week. Just days prior, Musk had been loudly promoting an antisemitic conspiracy theory about anti-white hatred among Jewish people on X, describing it as “the actual truth.” Is Netanyahu not bothered by the growing pile of evidence that Musk is comfortable saying incredibly discriminatory things about Jewish people? As with Altman, the rules just don’t apply when you’re Elon Musk.

There was also a business angle to Musk’s visit to Israel. He has a habit of waltzing into cataclysmic crises and offering up his services. It’s always billed as an effort to help people, but there’s usually a thinly veiled geopolitical motive. While in Israel, he struck a deal that will allow humanitarian agencies in Gaza to use Starlink, his satellite-based internet service operated by SpaceX. Internet connectivity and phone service have been decimated by Israel’s war on Gaza, in which airstrikes have destroyed infrastructure and the fuel blockade has left telecom companies all but unable to operate. So Starlink could really help here. But in this case, it will only go so far. Israel’s communications ministry is on the other end of the agreement and has made it clear that access to the network will be strictly limited to aid agencies, arguing that a more flexible arrangement could allow Hamas to take advantage. Journalists, local healthcare workers and just about everyone else will have to wait.

WHAT WE’RE READING

  • A study by Wired and the Integrity Institute’s Jeff Allen found that when the messaging service Telegram “restricts” channels that feature right-wing extremism and other forms of radicalized hate, they don’t actually disappear — they just become harder to “discover” for those who don’t subscribe. Vittoria Elliott has the story for Wired.
  • In her weekly Substack newsletter, crypto critic and Berkman Klein Center fellow Molly White offered a thoughtful breakdown of Silicon Valley’s “effective altruism” and “effective accelerationism” camps, which she writes “only give a thin philosophical veneer to the industry’s same old impulses.”

Russian propagandists turn their attention to Gaza https://www.codastory.com/newsletters/newsletter-russian-disinformation-antisemitism-propaganda/ Wed, 29 Nov 2023 14:01:51 +0000 https://www.codastory.com/?p=48534 Disinfo Matters looks beyond fake news to examine how the manipulation of narratives and rewriting of history are reshaping our world.

The post Russian propagandists turn their attention to Gaza appeared first on Coda Story.

Earlier this week, social media influencer and Russian state television’s favorite political commentator Jackson Hinkle celebrated reaching 2.2 million followers on X. He called on his vast audience to subscribe to his X Premium account for $3 to help him “CRUSH ZIONIST LIES!” The California-born Hinkle, only 24, has become a prominent social media presence solely due to his zealous pursuit of untruths.

Since Hamas’s Oct. 7 assault on Israeli civilians, Hinkle has devoted himself to posting anti-Israel content on social media, particularly X. Nearly all of his posts are blatant falsehoods and manipulations. Recently, for instance, he claimed that Israeli authorities staged a scene for Elon Musk, who recently visited the country, with unfired bullets in a crib. It was soon pointed out, however, that the bullets had indeed been fired. Nevertheless, the tweet is still up. Hinkle doesn’t bother with deleting posts or taking them back after errors have been exposed. He just continues to post more — and it works. His audience impressions over the last month alone run into the billions.  

Before the Hamas attacks, Hinkle spread Russian propaganda about the war in Ukraine. “Putin has God on his side in his quest to defeat NATO satanists,” Hinkle posted on X back in July. While Hinkle no longer posts about Ukraine, he is still serving Russia’s interests. On Nov. 27, for instance, Hinkle faithfully reported that “Hamas has released a Russian-Israeli citizen as a ‘thank you’ to President Putin for supporting Palestine!” More generally, though, the war in Gaza is an opportunity for Hinkle to do what the Kremlin most wants — focus online attention on the West’s seemingly unreflecting and hypocritical support for Israel. 

Hinkle has become a leading figure in that strange, social media-based netherworld of conspiracy theorists who have moved seamlessly from raging (or rather fomenting rage) about Covid vaccines, to raging about U.S. support for Ukraine, to now raging about the war in Gaza.

According to Pekka Kallioniemi, a propaganda researcher from Finland, these issues are “part of the same disinformation package.” In 2022, Kallioniemi began Vatnik Soup, a website and series of tweets in which he exposed, often sardonically, people and organizations he saw as “vatniks,” or useful idiots who would parrot Kremlin talking points online.  

Often, he told me, it is the same people who spread disinformation about Covid and vaccines and then about Russia’s invasion of Ukraine, who are now spreading pro-Hamas disinformation. “Their style is distinctive,” Kallioniemi said, because they have so successfully adopted the Russian propaganda technique of “high volume and multichannel disinformation.”

Posting on X, Kallioniemi recently noted the rapid proliferation on TikTok of videos purporting to show that Russian troops had been dispatched to “help” Palestinians defend themselves. “This is of course not true,” he wrote. But it didn’t matter. The larger narrative purpose was served — Russia is an ally and friend to those bullied by the West. “October 7,” he said, “was a big win for the Kremlin. It took the attention completely off the invasion of Ukraine. You began almost immediately to hear about an unwillingness to fund Ukraine’s defense indefinitely and about the need for peace talks.”

It is to Russia’s benefit, he added, that deliberate disinformation about Israel be allowed to infect the global conversation. “There has been a coordinated effort,” Kallioniemi told me, “to lower people’s trust in the authorities and to weaken democratic functioning.” A low-trust society, as the U.S. has gradually become, is “very vulnerable to disinformation and deep-state conspiracies,” he said. The pandemic proved to be particularly fertile ground for conspiracy theories, giving fresh impetus to a narrative about a globalist elite plotting to take over the world. Globalist narratives tend to be antisemitic, with Jewish people accused of being loyal to supranational entities that exert control over, say, international banking or the media. In 2020, a study commissioned in the U.K. found antisemitic content in 79% of 27 leading anti-vaccine forums.

The only long-term fix, Kallioniemi said, is education. “What Finland gets right,” he told me, “is that media literacy, critical thinking and checking sources are introduced very early. Even in pre-school, there is some understanding of the concept of disinformation and its impact.”        

Sometimes, though, education and critical thinking are not strong enough to withstand emotion and ideology. Jewish groups have long claimed that antisemitic disinformation is rampant on American university campuses. Some of these groups have just filed a lawsuit against the University of California, Berkeley for enabling “unchecked” antisemitism. If Russian propaganda about Gaza, spread by the likes of Hinkle, is finding an audience, it is because it cleverly exploits existing tensions.  

Dublin’s disinformation riots

Even broad educational achievements and moderate politics can fail to make societies immune to disinformation, as Ireland discovered last week. On Nov. 23, three young children and their teacher were stabbed in Dublin. Far-right groups called for young men to descend on the scene of the crime, claiming that the stabbings had been committed by an illegal immigrant. The crowd quickly became violent, smashing storefronts and setting police vehicles and buses on fire. The riot took the police by surprise, and hours elapsed before it was brought under control. The authorities quickly assigned blame to a far-right faction that they said had been “radicalized” online. It turned out that the attacker was an immigrant, an Algerian who had lived in Ireland for 20 years and was an Irish citizen. For what it’s worth, he was prevented from doing further damage by a much more recent immigrant, a Brazilian delivery driver who knocked him to the ground with his motorcycle helmet.

If the rioting was shocking, disinformation experts argue that it could have been anticipated. Eileen Culloty, a professor in the communications department at Dublin City University, has written that “the COVID-19 pandemic marked a major turning point for disinformation in Ireland as various conspiracy theorists, anti-establishment actors, and, in particular, right-wing and far-right extremists mobilized online and offline.” Anger over lockdowns and vaccines curdled into anger over immigration, as Ireland took in a disproportionate number of refugees from Ukraine in addition to record numbers of asylum seekers. Contributing to the anger were a housing crisis, a cost-of-living crisis and the belief that local people were being cut off from benefits and forced to compete for scarce resources. Over the last year, there have been a number of protests. Inevitably, the social frustration has been amplified by deliberate and targeted disinformation on social media, including from X owner Elon Musk. When Irish Prime Minister Leo Varadkar called for new hate speech legislation after the riots, Musk weighed in. “Ironically,” he posted, “the Irish PM hates the Irish people.” It was not the first time Musk had aligned himself with right-wing xenophobes. Varadkar’s father, incidentally, was an Indian-born doctor.

Rise of the trolls

And speaking of right-wing xenophobes: Dutch politician Geert Wilders is poised to form a coalition government in the Netherlands, after his party’s surprising success in snap elections earlier this month. If he can persuade anyone to work with him, that is. It is likely to prove challenging because in his public comments about Muslims in particular, Wilders can sound like an internet troll. He says his leadership style will be less confrontational, that he will be a prime minister for all Dutch people. Though he has yet to get the top job, his election success has already been celebrated by his far-right counterparts across Europe, including Hungary’s Viktor Orbán and France’s Marine Le Pen. He has also received acclaim from fellow Islamophobes in India. Last year, Wilders became a hero for Hindu nationalists when he defended Nupur Sharma, at the time a confident, abrasive spokesperson for the governing Bharatiya Janata Party. Sharma had appeared on a television debate show and made unprintably offensive remarks about the Prophet Muhammad and his third wife, a child bride, which provoked violent demonstrations in India and a diplomatic backlash from important trading partners such as Saudi Arabia, Kuwait and the UAE. The BJP ultimately suspended Sharma, but Wilders described her as a “hero who spoke nothing but the truth.” He added that “Hindus should be safe in India. It is their country, their homeland, it’s theirs! India is no Islamic nation.” It’s a sentiment that has won Wilders friends for life among Hindu nationalists in India, however rooted his words are in disinformation and conspiracy theory. 

WHAT WE’RE READING

  • Foreign-born media owners are not unheard of in the U.K., including Rupert Murdoch and Evgeny Lebedev, the son of a former KGB spy. So why is it causing such consternation that a consortium led by former CNN boss Jeff Zucker and funded largely by the vice president of the UAE, Sheikh Mansour bin Zayed Al Nahyan, is seeking to buy conservative broadsheet The Telegraph? Surely there is something concerning when a senior member of the autocratic government of a country not known for encouraging the free press finances the takeover of a national newspaper in another country? The soft power benefits to the UAE seem obvious, but what will the consequences be for The Telegraph?
  • “Across Ukraine at least two dozen Pushkin statues have been removed from their pedestals since the war began,” writes Thomas de Waal in Engelsberg Ideas. Given that the 19th-century poet, novelist and dramatist is considered to be Russia’s “national writer,” de Waal adds, “take down Pushkin’s statue and you are challenging Russia as a whole.” This excellent essay makes a compelling case for the need to emancipate rather than fetishize Russian literature.

When deepfakes go nuclear https://www.codastory.com/authoritarian-tech/ai-nuclear-war/ Tue, 28 Nov 2023 14:01:33 +0000 https://www.codastory.com/?p=48430 Governments already use fake data to confuse their enemies. What if they start doing this in the nuclear realm?

The post When deepfakes go nuclear appeared first on Coda Story.

Two servicemen sit in an underground missile launch facility. Before them is a matrix of buttons and bulbs glowing red, white and green. Old-school screens with blocky, all-caps text beam beside them. Their job is to be ready, at any time, to launch a nuclear strike. Suddenly, an alarm sounds. The time has come for them to shoot their deadly weapon.

Why did we write this story?

AI-generated deepfakes could soon begin to affect military intelligence communications. In line with our focus on authoritarianism and technology, this story delves into the possible consequences that could emerge as AI makes its way into the nuclear arena.

With the correct codes input, the doors to the missile silo open, pointing a bomb at the sky. Sweat shines on their faces. For the missile to fly, both must turn their keys. But one of them balks. He picks up the phone to call their superiors.

That’s not the procedure, says his partner. “Screw the procedure,” the dissenter says. “I want somebody on the goddamn phone before I kill 20 million people.” 

Soon, the scene — which opens the 1983 techno-thriller “WarGames” — transitions to another set deep inside Cheyenne Mountain, a military outpost buried beneath thousands of feet of Colorado granite. It exists in real life and is dramatized in the movie. 

In “WarGames,” the main room inside Cheyenne Mountain hosts a wall of screens that show the red, green and blue outlines of continents and countries, and what’s happening in the skies above them. There is not, despite what the servicemen have been led to believe, a nuclear attack incoming: The alerts were part of a test sent out to missile commanders to see whether they would carry out orders. All in all, 22% failed to launch.

“Those men in the silos know what it means to turn the keys,” says an official inside Cheyenne Mountain. “And some of them are just not up to it.” But he has an idea for how to combat that “human response,” the impulse not to kill millions of people: “I think we ought to take the men out of the loop,” he says. 

From there, an artificially intelligent computer system enters the plotline and goes on to cause nearly two hours of potentially world-ending problems. 

Discourse about the plot of “WarGames” usually focuses on the scary idea that a computer nearly launches World War III by firing off nuclear weapons on its own. But the film illustrates another problem that has become more pressing in the 40 years since it premiered: The computer displays fake data about what’s going on in the world. The human commanders believe it to be authentic and respond accordingly.

In the real world, countries — or rogue actors — could use fake data, inserted into genuine data streams, to confuse enemies and achieve their aims. How to deal with that possibility, along with other consequences of incorporating AI into the nuclear weapons sphere, could make the coming years on Earth more complicated.

The word “deepfake” didn’t exist when “WarGames” came out, but as real-life AI grows more powerful, it may become part of the chain of analysis and decision-making in the nuclear realm of tomorrow. The idea of synthesized, deceptive data is one AI issue that today’s atomic complex has to worry about.

You may have encountered the fruits of this technology in the form of Tom Cruise playing golf on TikTok, LinkedIn profiles for people who have never inhabited this world or, more seriously, a video of Ukrainian President Volodymyr Zelenskyy declaring the war in his country to be over. These are deepfakes — pictures or videos of things that never happened, but which can look astonishingly real. It becomes even more vexing when AI is used to create images that attempt to depict things that are indeed happening. Adobe recently caused a stir by selling AI-generated stock photos of violence in Gaza and Israel. The proliferation of this kind of material (alongside plenty of less convincing stuff) leads to an ever-present worry that any image presented as fact might actually have been fabricated or altered.

It may not matter much whether Tom Cruise was really out on the green, but the ability to see or prove what’s happening in wartime — whether an airstrike took place at a particular location or whether troops or supplies are really amassing at a given spot — can actually affect the outcomes on the ground. 

Similar kinds of deepfake-creating technologies could be used to whip up realistic-looking data — audio, video or images — of the sort that military and intelligence sensors collect and that artificially intelligent systems are already starting to analyze. It’s a concern for Sharon Weiner, a professor of international relations at American University. “You can have someone trying to hack your system not to make it stop working, but to insert unreliable data,” she explained.

James Johnson, author of the book “AI and the Bomb,” writes that when autonomous systems are used to process and interpret imagery for military purposes, “synthetic and realistic-looking data” can make it difficult to determine, for instance, when an attack might be taking place. People could use AI to gin up data designed to deceive systems like Project Maven, a U.S. Department of Defense program that aims to autonomously process images and video and draw meaning from them about what’s happening in the world.

AI’s role in the nuclear world isn’t yet clear. In the U.S., the White House recently issued an executive order about trustworthy AI, mandating in part that government agencies address the nuclear risks that AI systems bring up. But problem scenarios like some of those conjured by “WarGames” aren’t out of the realm of possibility. 

In the film, a teenage hacker taps into the military’s system and starts up a game he finds called “Global Thermonuclear War.” The computer displays the game data on the screens inside Cheyenne Mountain, as if it were coming from the ground. In the Rocky Mountain war room, a siren soon blares: It looks like Soviet missiles are incoming. Luckily, an official runs into the main room in a panic. “We’re not being attacked,” he yells. “It’s a simulation!”

In the real world, someone might instead try to cloak an attack with deceptive images that portray peace and quiet.

Researchers have already shown that the general idea behind this is possible: Scientists published a paper in 2021 on “deepfake geography,” or simulated satellite images. In that domain, officials have worried about images that might show infrastructure in the wrong location or terrain that’s not true to life, messing with military plans. Los Alamos National Laboratory scientists, for instance, made satellite images that included vegetation that wasn’t real and showed evidence of drought where the water levels were fine, all for the purposes of research. You could theoretically do the same for something like troop or missile-launcher movement.

AI that creates fake data is not the only problem: AI could also be on the receiving end, tasked with analysis. That kind of automated interpretation is already ongoing in the intelligence world, although it’s unclear specifically how it will be incorporated into the nuclear sphere. For instance, AI on mobile platforms like drones could help process data in real time and “alert commanders of potentially suspicious or threatening situations such as military drills and suspicious troop or mobile missile launcher movements,” writes Johnson. That processing power could also help detect manipulation because of the ability to compare different datasets. 

But creating those sorts of capabilities can help bad actors do their fooling. “They can take the same techniques these AI researchers created, invert them to optimize deception,” said Edward Geist, an analyst at the RAND Corporation. For Geist, deception is a “trivial statistical prediction task.” But recognizing and countering that deception is where the going gets tough. It involves a “very difficult problem of reasoning under uncertainty,” he told me. Amid the high stakes of global politics, and especially in conflict, countries can never be exactly sure what’s going on, who’s doing what, and what the consequences of any action may be.

There is also the potential for fakery in the form of data that’s real: Satellites may accurately display what they see, but what they see has been expressly designed to fool the automated analysis tools.

As an example, Geist pointed to Russia’s intercontinental ballistic missiles. When they are stationary, they’re covered in camo netting, making them hard to pick out in satellite images. When the missiles are on the move, special devices attached to the vehicles that carry them shoot lasers toward detection satellites, blinding them to the movement. At the same time, decoys are deployed — fake missiles dressed up as the real deal, to distract and thwart analysis. 

“The focus on using AI outstrips or outpaces the emphasis put on countermeasures,” said Weiner.

Given that both physical and AI-based deception could interfere with analysis, it may one day become hard for officials to trust any information — even the solid stuff. “The data that you’re seeing is perfectly fine. But you assume that your adversary would fake it,” said Weiner. “You then quickly get into the spiral where you can’t trust your own assessment of what you found. And so there’s no way out of that problem.” 

From there, it’s distrust all the way down. “The uncertainties about AI compound the uncertainties that are inherent in any crisis decision-making,” said Weiner. Similar situations have arisen in the media, where it can be difficult for readers to tell if a story about a given video — like an airstrike on a hospital in Gaza, for instance — is real or in the right context. Before long, even the real ones leave readers feeling dubious.

Ally Sheedy and Matthew Broderick in the 1983 MGM/UA movie “WarGames.” Hulton Archive/Getty Images.

More than a century ago, Alfred von Schlieffen, a German war planner, envisioned the battlefield of the future: a person sitting at a desk with telephones splayed across it, ringing in information from afar. This idea of having a godlike overview of conflict — a fused vision of goings-on — predates both computers and AI, according to Geist.

Using computers to synthesize information in real time goes back decades too. In the 1950s, for instance, the U.S. built the Continental Air Defense Command, which relied on massive early computers for awareness and response. But tests showed that a majority of Soviet bombers would have been able to slip through — often because they could fool the defense system with simple decoys. “It was the low-tech stuff that really stymied it,” said Geist. Some military and intelligence officials have concluded that next-level situational awareness will come with just a bit more technological advancement than they previously thought — although this has not historically proven to be the case. “This intuition that people have is like, ‘Oh, we’ll get all the sensors, we’ll buy a big enough computer and then we’ll know everything,’” he said. “This is never going to happen.”

This type of thinking seems to be percolating once again and might show up in attempts to integrate AI in the near future. But Geist’s research, which he details in his forthcoming book “Deterrence Under Uncertainty: Artificial Intelligence and Nuclear Warfare,” shows that the military will “be lucky to maintain the degree of situational awareness we have today” if they incorporate more AI into observation and analysis in the face of AI-enhanced deception. 

“One of the key aspects of intelligence is reasoning under uncertainty,” he said. “And a conflict is a particularly pernicious form of uncertainty.” An AI-based analysis, no matter how detailed, will only ever be an approximation — and in uncertain conditions there’s no approach that “is guaranteed to get an accurate enough result to be useful.” 


In the movie, with the proclamation that the Soviet missiles are merely simulated, the crisis is temporarily averted. But the wargaming computer, unbeknownst to the authorities, is continuing to play. As it keeps making moves, it displays related information about the conflict on the big screens inside Cheyenne Mountain as if it were real and missiles were headed to the States. 

It is only when the machine’s inventor shows up that the authorities begin to think that maybe this could all be fake. “Those blips are not real missiles,” he says. “They’re phantoms.”

To rebut fake data, the inventor points to something indisputably real: The attack on the screens doesn’t make sense. Such a full-scale wipeout would immediately provoke total U.S. retaliation — meaning that the Soviet Union would be all but ensuring its own annihilation.

Using his own judgment, the general calls off the U.S.’s retaliation. As he does so, the missiles onscreen hit the 2D continents, colliding with the map in circular flashes. But outside, in the real world, all is quiet. It was all a game. “Jesus H. Christ,” says an airman at one base over the comms system. “We’re still here.”

Similar nonsensical alerts have appeared on real-life screens. Once, in the U.S., alerts of incoming missiles came through due to a faulty computer chip. The system that housed the chip sent erroneous missile alerts on multiple occasions. Authorities had reason to suspect the data was likely false. But in two instances, they began to proceed as if the alerts were real. “Even though everyone seemed to realize that it’s an error, they still followed the procedure without seriously questioning what they were getting,” said Pavel Podvig, senior researcher at the United Nations Institute for Disarmament Research and a researcher at Princeton University. 

In Russia, meanwhile, operators did exercise independent thought in a similar scenario, when an erroneous preliminary launch command was sent. “Only one division command post actually went through the procedure and did what they were supposed to do,” he said. “All the rest said, ‘This has got to be an error,’” because it would have been a surprise attack not preceded by increasing tension, as expected. It goes to show, Podvig said, “people may or may not use their judgment.” 

You can imagine in the near future, Podvig continued, nuclear operators might see an AI-generated assessment saying circumstances were dire. In such a situation, there is a need “to instill a certain kind of common sense,” he said, and make sure that people don’t just take whatever appears on a screen as gospel. “The basic assumptions about scenarios are important too,” he added. “Like, do you assume that the U.S. or Russia can just launch missiles out of the blue?”

People, for now, will likely continue to exercise judgment about attacks and responses — keeping, as the jargon goes, a “human in the loop.”

The idea of asking AI to make decisions about whether a country will launch nuclear missiles isn’t an appealing option, according to Geist, though it does appear in movies a lot. “Humans jealously guard these prerogatives for themselves,” Geist said. 

“It doesn’t seem like there’s much demand for a Skynet,” he said, referencing another movie, “Terminator,” where an artificial general superintelligence launches a nuclear strike against humanity.

Podvig, an expert in Russian nuclear goings-on, doesn’t see much desire for autonomous nuclear operations in that country. 

“There is a culture of skepticism about all this fancy technological stuff that is sent to the military,” he said. “They like their things kind of simple.” 

Geist agreed. While he admitted that Russia is not totally transparent about its nuclear command and control, he doesn’t see much interest in handing the reins to AI.

China, of course, is generally very interested in AI, and specifically in pursuing artificial general intelligence, a type of AI which can learn to perform intellectual tasks as well as or even better than humans can.

William Hannas, lead analyst at the Center for Security and Emerging Technology at Georgetown University, has used open-source scientific literature to trace developments and strategies in China’s AI arena. One big development is the founding of the Beijing Institute for General Artificial Intelligence, backed by the state and directed by former UCLA professor Song-Chun Zhu, who has received millions of dollars of funding from the Pentagon, including after his return to China. 

Hannas described how China has shown a national interest in “effecting a merger of human and artificial intelligence metaphorically, in the sense of increasing mutual dependence, and literally through brain-inspired AI algorithms and brain-computer interfaces.”

“A true physical merger of intelligence is when you’re actually lashed up with the computing resources to the point where it does really become indistinguishable,” he said. 

That’s relevant to defense discussions because, in China, there’s little separation between regular research and the military. “Technological power is military power,” he said. “The one becomes the other in a very, very short time.” Hannas, though, doesn’t know of any AI applications in China’s nuclear weapons design or delivery. Recently, U.S. President Joe Biden and Chinese President Xi Jinping met and made plans to discuss AI safety and risk, which could lead to an agreement about AI’s use in military and nuclear matters. Also, in August, regulations on generative AI developed by China’s Cyberspace Administration went into effect, making China a first mover in the global race to regulate AI.

It’s likely that the two countries would use AI to help with their vast streams of early-warning data. And just as AI can help with interpretation, countries can also use it to skew that interpretation, to deceive and obfuscate. All three tasks are age-old military tactics — now simply upgraded for a digital, unstable age.

Science fiction convinced us that a Skynet was both a likely option and closer on the horizon than it actually is, said Geist. AI will likely be used in much more banal ways. But the ideas that dominate “WarGames” and “Terminator” have endured for a long time. 

“The reason people keep telling this story is it’s a great premise,” said Geist. “But it’s also the case,” he added, “that there’s effectively no one who thinks of this as a great idea.” 

It’s probably so resonant because people tend to have a black-and-white understanding of innovation. “There’s a lot of people very convinced that technology is either going to save us or doom us,” said Nina Miller, who formerly worked at the Nuclear Threat Initiative and is currently a doctoral student at the Massachusetts Institute of Technology. The notion of an AI-induced doomsday scenario is alive and well in the popular imagination and also has made its mark in public-facing discussions about the AI industry. In May, dozens of tech CEOs signed an open letter declaring that “mitigating the risk of extinction from AI should be a global priority,” without saying much about what exactly that means. 

But even if AI does launch a nuclear weapon someday (or provide false information that leads to an atomic strike), humans still made the decisions that led us there. Humans created the AI systems and made choices about where to use them. 

And, besides, in the case of a hypothetical catastrophe, AI didn’t create the environment that led to a nuclear attack. “Surely the underlying political tension is the problem,” said Miller. And that is thanks to humans and their desire for dominance — or their motivation to deceive. 

Maybe the humans need to learn what the computer did at the end of “WarGames.” “The only winning move,” it concludes, “is not to play.”

The post When deepfakes go nuclear appeared first on Coda Story.

]]>
Stamping out hate speech or stifling free speech? https://www.codastory.com/newsletters/newsletter-germany-anti-semitism-free-speech/ Wed, 22 Nov 2023 12:45:32 +0000 https://www.codastory.com/?p=48455 Disinfo Matters looks beyond fake news to examine how the manipulation of narratives and rewriting of history are reshaping our world.

The post Stamping out hate speech or stifling free speech? appeared first on Coda Story.

]]>
Since the Hamas attacks on Oct. 7, German officials have made it clear that they support Israel whatever its response. With Germany’s desire to atone for its history, it is understandable that it feels a special duty towards Israel. But the German response has lacked nuance. It has arguably conflated sympathy for Palestine with support for Hamas. And by banning protests and condemning standard criticism of Israeli policies as antisemitic, German authorities have been accused of stifling free speech and expression. 

Nearly anyone can be silenced. On Nov. 9, the leading German daily Süddeutsche Zeitung denounced Indian art critic Ranjit Hoskote for signing an open letter in 2019 that described Zionism as a “racist ideology calling for a settler-colonial, apartheid state where non-Jews have unequal rights.” 

Hoskote was of interest to the German media because he sat on a search committee tasked with appointing the next art director for Documenta. Founded in 1955, Documenta is an internationally significant exhibition of contemporary art held every five years in the historic city of Kassel in central Germany. Within days of the newspaper’s article, Hoskote resigned. Documenta had precipitated his resignation by publicly declaring that his conduct in signing the letter four years ago “was not remotely acceptable” because of its “explicitly anti-Semitic content.”

Even before Hoskote resigned, the Israeli artist Bracha L. Ettinger stepped down from the search committee, citing her inability to continue to participate, describing the feeling of being “paralyzed under rockets, with the details of the massacre committed by Hamas against Israeli civilians, women, and babies, and of the kidnapping of children and babies and civilians, being streamed on my screen during our lunch and coffee breaks.” Though the allegations against Hoskote were public by the time Ettinger resigned, she said they had nothing to do with her decision.

In the wake of both resignations, the remaining four members of the search committee stepped down last week. “In the current circumstances,” they wrote, “we do not believe that there is a space in Germany for an open exchange of ideas.” Intellectual discourse in Germany, they argued, was falling prey to “over-simplification and prejudgments.” Hoskote defended himself in his own lengthy resignation letter. “I feel, strongly,” he said, “that I have been subjected to the proceedings of a kangaroo court.”

Documenta is particularly sensitive to any association with antisemitism because the 2022 edition, intended to foreground perspectives from the Global South, was mired in controversy before the exhibition even opened. An Indonesian collective included caricatures on a 60-foot-long painted banner that the Israeli embassy in Germany said was “Goebbels-style propaganda.” One of the figures on the banner was a soldier with a pig’s head. He wore a Star of David bandana around his neck and a helmet with the word “Mossad” on it, the name of Israel’s intelligence service. In addition, the curators of the exhibition had reportedly not invited any Jewish or Israeli artists to participate. Seven academics conducted an inquiry into events at Documenta, concluding that the exhibition was “an echo chamber for Israel-related antisemitism, and sometimes for pure antisemitism.” 

Keen to avoid a repeat of the 2022 scandal, Documenta urged Hoskote to distance himself from the letter. Instead, he chose to resign, claiming he was “being asked to accept a sweeping and untenable definition of anti-Semitism that conflates the Jewish people with the Israeli state.”

What happened at Documenta mirrors similarly anguished resignations around the world, including within the media. The question we seem unable to answer collectively is this: When does free speech curdle into unacceptable, even hateful speech?

The open letter that Hoskote signed in 2019 condemned an event being held at the Israeli consulate in Mumbai that celebrated the shared purpose of Zionism and Hindutva, the aggressive Hindu nationalist ideology embraced by India’s Prime Minister Narendra Modi. Hindutva is usually traced back to the early 1920s and the ideas of V.D. Savarkar, an admirer of Nazi Germany.  Just as Savarkar saw Germany as an example of how to deal with minorities, so his Hindutva descendants now see Israel.

Modi’s Hindu nationalist supporters identify closely with Israel, believing that they share a common enemy in Islamist terrorism. Israel, in their view, is a model for a future Hindu nation in which minorities, particularly Muslims, will have to know their place. This attitude has turned India’s traditional support for Palestine on its head. On social media, Hindutva supporters have been at the forefront of spreading Islamophobic and anti-Palestinian disinformation. Police in India have also been quick to arrest pro-Palestinian protestors, with as many as 200 students detained at a single protest in Delhi last month. In the Muslim-majority state of Kashmir, the government banned all expressions of pro-Palestinian protest.

Free speech, and the right to offer your opinion, however contested, did not apply to either Hoskote or the pro-Palestinian protesters arrested in Delhi. Instead, they were silenced by a narrative that brooks no departures from the ruling party line — whether in Germany or in India.  

Argentina’s AI election

With elections in the United States and India scheduled next year, Argentina’s recently concluded two-part presidential election offers a dire prognosis — expect artificial intelligence to feature prominently. Both candidates in the run-off, Javier Milei (the eventual winner) and Sergio Massa, used AI technology to generate campaign propaganda. Some of this material was satirical, mocking and stylized, but plenty of it was also misleading. The potential is there to fabricate entirely convincing deepfakes in which a person’s image and voice can be manipulated to say and do things they have never said or done.

Should all AI-generated images now carry a disclaimer? Meta, whose social media sites Facebook and Instagram are major platforms for digital advertising, says that from next year it will require advertisers to declare if and how they’ve used AI. Meta also said it would bar political campaigns and advertisers from using Meta’s generative AI technologies.

Massa’s communications team told The New York Times that their use of AI was strictly intended as entertainment and was clearly labeled. But is the point of AI-generated content not to persuade voters that particular images are real? I’m not sure. I think what draws political campaigns to AI is the volume and variety of messages that can be created. The ease with which images can be generated at scale means that voters will be presented with a carefully constructed picture of candidates and their rivals — one in which fiction is impossible to separate from fact.

Russia jails yet more critics

With so much of the media’s focus on Gaza, the Kremlin can get on with the business of jailing its critics in relative obscurity. Too little attention was paid to the conviction of Sasha Skochilenko, an artist who was arrested last year for swapping out price tags in supermarkets with anti-war messages. She was arrested just a month after Russia passed a law criminalizing any public comment on the war that contradicted the official narrative. On Nov. 16, Skochilenko was sentenced to seven years in prison. “Everyone sees and knows that it’s not a terrorist you’re trying,” she told the judge. “You’re trying a pacifist.” Since the new law came into effect, nearly 20,000 Russians have been arrested for protesting against the war in Ukraine.

Just yesterday, a court in Moscow issued a warrant for the arrest of Nadezhda Tolokonnikova, a founding member of Pussy Riot, the feminist punk rock protest group. She is reportedly not in Russia right now and at least temporarily safe from the long arm of the Kremlin. Tolokonnikova spent nearly two years in a Russian prison back in 2012 for breaking into a Moscow cathedral as part of an anti-Putin protest. The crime Tolokonnikova is now accused of committing seems practically invented for Pussy Riot — “insulting believers’ religious feelings.” 

WHAT WE’RE READING:

  • The Washington Post has been doing some terrific reporting out of India, shedding light on the contours of India’s increasingly undemocratic shape. In this recent dispatch, Gerry Shih and Anant Gupta ask industry insiders about Netflix and Amazon preferring to self-censor and pull out of politically and religiously sensitive projects rather than risk annoying the Hindu nationalist government and their fervent online troll army.
  • There’s one more resignation letter that merits mention this week, and that comes from Anne Boyer. The now-former poetry editor of The New York Times Magazine writes that she would rather resign than continue to work alongside “those who aim to acclimatize us to this unreasonable suffering.” “No more ghoulish euphemisms,” she writes about the coverage of Gaza. “No more warmongering lies.” 

The post Stamping out hate speech or stifling free speech? appeared first on Coda Story.

]]>
In India, Big Brother is watching https://www.codastory.com/authoritarian-tech/india-surveillance-modi-democratic-freedoms/ Tue, 21 Nov 2023 09:53:22 +0000 https://www.codastory.com/?p=48360 Apple warned Indian journalists and opposition politicians last month that their phones had likely been hacked by a state-sponsored attacker. Is this more evidence of democratic backsliding?

The post In India, Big Brother is watching appeared first on Coda Story.

]]>
Last month, journalist Anand Mangnale woke to find a disturbing notification from Apple on his mobile phone: “State-sponsored attackers may be targeting your iPhone.” He was one of at least a dozen journalists and Indian opposition politicians who said they had received the same message. “These attackers are likely targeting you individually because of who you are and what you do,” the warning read. “While it’s possible this is a false alarm, please take it seriously.”

Why This Story?

India, the world’s most populous democracy, goes to the polls next year and is likely to reelect Narendra Modi for a third consecutive five-year term. But evidence is mounting that India’s democratic freedoms are in regression.

Mangnale is an editor at the Organized Crime and Corruption Reporting Project, a global non-profit media outlet. In August, he and his co-authors Ravi Nair and NBR Arcadio published a detailed inquiry into labyrinthine offshore investment structures through which the Adani Group — an India-based multibillion-dollar conglomerate with interests in everything from ports, infrastructure and cement to green energy, cooking oil and apples — might have been manipulating its stock price. The documents were shared with both Financial Times and The Guardian, which also published lengthy stories alleging that the Adani Group appeared to be using funds from shell companies in Mauritius to break Indian stock market rules.

Mangnale’s phone was attacked with spyware just hours after reporters had submitted questions to the Adani Group in August for their investigation, according to an OCCRP press release. Mangnale hadn’t sent the questions, but as the regional editor, his name was easy to find on the OCCRP website.

Gautam Adani, the Adani Group’s chairman and the second richest person in India, has been close to Indian Prime Minister Narendra Modi for decades. When Modi was campaigning in the 2014 general elections, which brought him to power with a sweeping majority, he used a jet and two helicopters owned by the Adani Group to crisscross the country. Modi’s perceived bond with Adani as well as with Mukesh Ambani, India’s richest man — all three come from the prosperous western Indian state of Gujarat — has for years given rise to accusations of crony capitalism and suggestions that India now has its own set of Russian-style oligarchs.

The Adani Group’s supposed influence on Modi is a major campaign issue for opposition parties, many of which are coming together in a coalition to take on the ruling Bharatiya Janata Party in the 2024 general election. According to Rahul Gandhi — leader of the opposition Congress party and scion of the Nehru-Gandhi dynasty, which has provided three Indian prime ministers — the Adani Group is so close to power it is practically synonymous with the government. He said Apple’s threat notifications showed that the government was hacking the phones of politicians who sought to expose Adani and his hold over Modi. 

Mahua Moitra, a prominent opposition politician and outspoken critic of Adani, reported that she had also received the warning from Apple to her phone. She posted on X: “Adani and PMO bullies — your fear makes me pity you.” PMO stands for the prime minister’s office.   

Mangnale, referring to the opposition’s allegations, told me that there was only circumstantial evidence to suggest that the Apple notification could be tied to the Indian government. As for his own phone, a forensic analysis commissioned by OCCRP did not indicate which government or government agency was behind the attack, nor did it surface any evidence that the Adani Group was involved. But the timing raised eyebrows, as the Modi government has been accused in the past of using spyware on political opponents, critical journalists, scholars and lawyers. 

In 2019, the messaging service WhatsApp, owned by Meta, filed a lawsuit in a U.S. federal court against the Israel-based NSO Group, developers of a spyware called Pegasus, in which it was revealed that the software had been used to target Indian journalists and activists. Two years later, The Pegasus Project, an international journalistic investigation, reported that the phone numbers of at least 300 Indian individuals — Rahul Gandhi among them — had been slated for targeting with the eponymous weapons-grade spyware. And last year, The New York Times reported that Pegasus spyware was included in a $2 billion defense deal that Modi signed in 2017, on the first ever visit made by an Indian prime minister to Israel. In November 2021, Apple sued NSO too, arguing that in a “free society, it is unacceptable to weaponize powerful state-sponsored spyware against those who seek to make the world a better place.”

What is happening to Mangnale is the most recent iteration of a script that has been playing out for the last nine years. India’s democratic regression is evident in its declining scores in a variety of international indices. In the latest World Press Freedom Index, compiled by Reporters Without Borders, India ranks 161 out of 180 countries, and its score has been declining sharply since 2017. According to RSF, “violence against journalists, the politically partisan media and the concentration of media ownership all demonstrate that press freedom is in crisis.”  

By May next year, India will hold general elections, in which Modi is expected to win a third consecutive five-year term as prime minister and further entrench a Hindu nationalist agenda. Since 2014, as India has become a strategic potential counterweight to runaway Chinese power and influence in the Indo-Pacific region, Modi has reveled in being increasingly visible on the global stage. Abroad, he has brandished India’s credentials as a pluralist democracy. The mounting criticism in the Western media of his authoritarian tendencies and Hindu chauvinism has seemingly had little effect on India’s diplomatic standing. Meanwhile at home, Modi has arguably been using — perhaps misusing — the full authority of the prime minister’s office to stifle opposition critics. 

Indian Prime Minister Narendra Modi and billionaire businessman Gautam Adani (left) have long had a mutually beneficial relationship that critics allege crosses the line into crony capitalism. Vijay Soneji/Mint via Getty Images.

The morning after Apple sent out its warning, there was an outpouring of anger on social media, with leading opposition figures accusing the government of spying. Apple, as a matter of course, says it is “unable to provide information about what causes us to issue threat notifications.” The logic is that such information “may help state-sponsored attackers adapt their behavior to evade detection in the future.” But the lack of information leaves a gap that is then filled by speculation and conspiracies. Apple’s circumspect message, containing within it the possibility that the threat notification might be false altogether, also gives governments plausible deniability.

Right on cue, Ashwini Vaishnaw, India’s minister of information and technology, managed in a single statement to claim that the government was concerned about Apple’s notification and would “get to the bottom of it” while also dismissing surveillance concerns as just bellyaching. “There are many compulsive critics in our country,” Vaishnaw said about the allegations from opposition politicians. “Their only job is to criticize the government.” Lawyer Apar Gupta, founder of the Internet Freedom Foundation, described Vaishnaw’s statements as an attempt to “trivialize or misdirect public attention.”

Finding that his phone had been attacked by spyware was not the only example of Mangnale being targeted after OCCRP published its investigation into the Adani Group’s possibly illegal stock manipulation. In October, the Gujarat police summoned Mangnale and his co-author Ravi Nair to the state capital Ahmedabad to question them about the OCCRP report. Neither journalist lives in the state, which made the police summons, based on a single complaint by an investor in Adani stocks, seem like intimidation. It took the intervention of India’s Supreme Court to grant both journalists temporary protection from arrest.

Before the Supreme Court, the well-known lawyer Indira Jaising had argued that the Gujarat police had no jurisdiction to arbitrarily summon Mangnale and Nair to the state without informing them in what capacity they were being questioned. It seemed, she told the court, like a “prelude to arrest” and thus a violation of their constitutional right to personal liberty. A week later, the Supreme Court made a similar ruling to protect two Financial Times correspondents based in India from arrest. The journalists, in Mumbai and Delhi, had not even written the article based on documents shared by the OCCRP, but were still summoned by police to Gujarat. On December 1, the police are expected to explain to the Supreme Court why they are seemingly so eager to question the reporters.

While the mainstream television news networks in India frequently and loudly debate news topics on air, there is little coverage of the pressure that the Indian government puts on individuals who try to hold the government to account. Ravish Kumar, an esteemed Hindi-language journalist, told me that few people in India were aware of the threat to journalists and opposition voices in Modi’s India. “When people hear allegations made by political figures such as Rahul Gandhi, they can be dismissed as politics rather than fact. There is no serious discussion of surveillance in the press,” he said. 

Kumar once had a substantial platform on NDTV, a respected news network that had built its reputation over decades. In March this year, the Adani Group completed a hostile takeover of NDTV, leading to a series of resignations by the network’s most recognizable anchors and editors, including Kumar. NDTV is now yet another of India’s television news networks owned by corporations that are either openly friendly to the Modi government or unwilling to jeopardize their other businesses by being duly critical. 

Nowadays, Kumar reports for his personal YouTube channel, albeit one with about 7.8 million subscribers. A documentary about his lonely fight to keep reporting from India both accurately and skeptically was screened in cinemas across the U.K. and U.S. in July. 

According to Kumar, journalists and critics are naturally fearful about the Indian government’s punitive measures because some have ended up in prison on the basis of dubious evidence found on their phones and laptops. Most notoriously, a group of reputed academics, writers and human rights activists were accused of inciting riots in 2018 and plotting to assassinate the prime minister. Independent analysts hired by The Washington Post reported that the electronic evidence in the case was likely planted. 

Some of this possibly planted evidence was found on the computer of Stan Swamy, an octogenarian Jesuit priest who was charged with crimes under India’s anti-terror law and died in 2021 as he awaited trial. Swamy suffered from Parkinson’s disease, which can make everyday actions like eating and drinking difficult. While in custody, he was treated so poorly by the authorities that he had to appeal for a month before he was given a straw to make it easier for him to drink.

The threat of arrest hangs like a Damoclean sword above the heads of journalists like Mangnale who dare to ask questions of power and investigate institutional corruption. Despite the interim stay on his arrest, Mangnale still faces further court proceedings and the possibility of interrogation by the Gujarat police. In the words of Drew Sullivan, OCCRP’s publisher: “The police hauling in reporters for vague reasons seems to represent state-sanctioned harassment of journalists and is a direct assault on freedom of expression in the world’s largest democracy.”

The post In India, Big Brother is watching appeared first on Coda Story.

]]>
Fleeing war? Need shelter? Personal data first, please https://www.codastory.com/newsletters/conflict-refugees-data-surveillance/ Thu, 16 Nov 2023 14:24:50 +0000 https://www.codastory.com/?p=48355 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Hikvision offers ethnic minority “alerts” in Chinese university dining halls, and surveillance hits an all-time high in the West Bank.

The post Fleeing war? Need shelter? Personal data first, please appeared first on Coda Story.

]]>
More people have been displaced by violence and natural disasters over the past decade than ever before in human history, and the numbers — that already exceed 100 million — keep climbing. Between ongoing conflict in the Democratic Republic of Congo, Pakistan’s mass expulsion of people of Afghan origin and Israel’s bombardment of Gaza, millions more people have been newly forced to leave their homes since October. 

When people become displaced en masse, organizations like the U.N., with its World Food Program and refugee agency, will often step in to help. But today, these organizations typically collect refugees’ data first — sometimes before they distribute any food or medicine. Fingerprinting, iris scans and even earlobe measurements are now a common requirement for anyone seeking to meet their basic needs.

This week I caught up with Zara Rahman, a tech and social justice researcher who studies the drive across international humanitarian and intergovernmental organizations like the U.N. and the World Bank to digitize our identities.

“Of course, U.N. agencies are trying to figure out how much food and what resources we need,” Rahman told me. But “the amount of information that is being demanded and collected from people in comparison to what is actually needed in order to provide resources is just wildly different.” 

In “Machine Readable Me: The Hidden Ways Tech Shapes Our Identities,” her new book on the global push to digitize our lives, Rahman looks at the history of data collection by governments and international agencies and what happens when their motives change or data falls into the wrong hands. Nazi Germany is a top pre-digital case study here — she has a great passage about how members of the Dutch resistance bombed Amsterdam’s civil registry office during World War II to prevent Nazis from using the registry to identify and persecute Jews.

She then leaps forward to Afghanistan, where U.S. occupying forces deployed data collection systems that were later seized by the Taliban when they swept back into power in 2021. These databases gave Taliban leadership incredibly detailed information about the lives of people who worked for the U.S. government — to say nothing of women, whose lives and opportunities have been entirely rewritten by the return to Taliban rule. We may never know the extent of the damage incurred here.

Data collection and identity systems are also used, or could potentially be used, to persecute and victimize people whose nationality is contested, like many of those being expelled right now from Pakistan. Rahman emphasized that what happens to these people may depend on who the state perceives them to be and whether they are seen as people who “should return to Pakistan at some point.” 

Rohingya Muslims, she reminded me, were famously denied citizenship and the documentation to match by the Myanmar government for generations. Instead, in the eyes of the state, they were “Bengalis” — an erroneous suggestion that they had come from Bangladesh. In 2017, hundreds of thousands of Rohingya people fled the Burmese military’s ethnic cleansing operations in western Myanmar and landed in Bangladesh, where the government furnished them with IDs saying that they were from Myanmar, thereby barring them from putting down roots in Bangladesh. In effect, both countries leveraged their identity systems to render the Rohingya people stateless and wash their hands of this population. 

What recourse do people have in such circumstances? For the very rich, these rules don't apply. People with deep pockets rarely find themselves in true refugee situations, and some even purchase their way to citizenship — in her book, Rahman cites a figure from Bloomberg, which reported that "investor-citizens spent $2 billion buying passports in 2014." But most of the tens of millions of people affected by these systems are struggling to survive — the financial and political costs of litigating or challenging authorities are far out of reach. And with biometric data part of the package, so is the option of slipping through the system or somehow establishing yourself informally. Your eyes are your eyes and can be used to identify you forever.

GLOBAL NEWS

Facial recognition tech is a key tool in China's campaign against ethnic Uyghurs. This isn't news, but the particular ways in which Chinese authorities deploy such tools to persecute Uyghur people, most of whom are Muslim, continue to horrify me. It came to light recently that Hikvision, the popular surveillance camera maker that offers facial recognition software, won a state contract in 2022 to develop a system that conducts "Assisted Analysis Of Ethnic Minority Students." It's worth noting that Hikvision has boasted in the past of its cameras' abilities to spot "Uyghur" facial features, a brag that helped get the technology blacklisted in the U.S. But while you can't buy it here, it's pretty common across Asia, Africa and even in the U.K. The recently leaked tenders and contracts, published on IPVM, show that the company developed tools that alerted Chinese authorities to university students "suspected of fasting" during Ramadan and monitored travel plans, observation of holidays and even the books ethnic minority students checked out of the library. Paging George Orwell.

Israel is also doubling down on facial recognition and other hardcore surveillance tech, after its world-renowned intelligence system failed to help prevent the deadly attacks of October 7. In the occupied West Bank, Palestinians report that their daily movements are being watched and scrutinized like never before. That's saying a lot in places like the city of Hebron, which has been dotted with military checkpoints, watchtowers and CCTV cameras — some of which are supplied by Hikvision — for years now. In a dispatch this week for Wired, Tom Bennett wrote about the digital profiling and facial recognition surveillance database known as Wolf Pack, which allows military officers to pull up complex profiles on all Palestinians in the territory simply by scanning their faces. In a May 2023 report, Amnesty International asserted that whenever a Palestinian person goes through a checkpoint where the system is in use, "their face is scanned, without their knowledge or consent."

Some of the world's most powerful tech companies are headquartered in Israel or have a major presence there. So the country's use of technology to surveil Palestinians and identify targets in Gaza is a burning issue right now, including for engineers and tech ethics specialists around the world. There's an open letter going around, signed by some of the biggest names in the responsible artificial intelligence community, that condemns the violence and the use of "AI-driven technologies for warmaking," the aim of which, they write, is to "make the loss of human life more efficient." The letter covers a lot of ground, including the surveillance systems I mentioned above and Project Nimbus, the $1.2 billion deal under which Amazon and Google provide cloud computing services to the Israeli government and military. Engineers from both companies have been advocating for their employers to cancel that contract since it first became public in 2021.

The letter also notes the growing pile of evidence of anti-Palestinian bias on Meta’s platforms. Two recent stand-out examples are Instagram’s threat to suspend the account of acclaimed journalist Ahmed Shihab-Eldin over a video he posted that showed Israeli soldiers abusing Palestinian detainees, and the shadowbanning of digital rights researcher Mona Shtaya after she posted a link to an essay she wrote for the Middle East Institute on the very same issue. Coincidence? Looking at Meta’s track record, I very much doubt it.

WHAT WE’RE READING

  • I’ve written a few times about how police in the U.S. have misidentified suspects in criminal cases based on faulty intel from facial recognition software. Eyal Press has a piece on the issue for The New Yorker this week that asks if the technology is pushing aside older, more established methods of investigation or even leading police to ignore contradictory evidence.
  • Peter Thiel is taking a break from democracy — and he won’t be bankrolling Trump’s 2024 presidential campaign. Read all about it in Barton Gellman’s illuminating profile of the industry titan for The Atlantic.

Gaza’s journalists, caught between bombs and disinformation https://www.codastory.com/newsletters/newsletter-israel-disinformation-gaza/ Wed, 15 Nov 2023 13:07:32 +0000 https://www.codastory.com/?p=48332 Disinfo Matters looks beyond fake news to examine how the manipulation of narratives and rewriting of history are reshaping our world.

The post Gaza’s journalists, caught between bombs and disinformation appeared first on Coda Story.

More than 11,000 people have been killed in about six weeks, as Israel bombs the Gaza Strip in its bid to wipe out Hamas. The numbers are beginning to have a numbing effect. And that may be precisely the point. “We Are Not Numbers” is a website that publishes stories largely written by young people who live in Gaza. The numbers, the writers say, “don’t convey the daily personal struggles and triumphs, the tears and the laughter, and the aspirations that are so universal that if it weren’t for the context they would immediately resonate with virtually everyone.”

Inevitably now, these stories are about death and displacement. Last month, Mahmoud al-Naouq, the 25-year-old brother of "We Are Not Numbers" co-founder Ahmed al-Naouq, was killed, along with several other members of his family. Mahmoud had just received a scholarship to go to graduate school in Australia. Al-Naouq is hardly the only local journalist in mourning. A correspondent for Al Jazeera was on the air when he heard that his wife, 7-year-old daughter, teenage son and baby grandson had all been killed in an Israeli airstrike. He was filmed, in tears, standing over his dead son's body. "I suppose I should thank God," he said, "that at least some of my family survived."

Among the thousands of people who have been killed in Gaza are dozens of journalists. The Committee to Protect Journalists says 42 journalists and media workers have been killed (as of Tuesday, November 14) during this conflict, 37 of them Palestinian. The CPJ says these numbers are unprecedented since it began keeping such records in 1992. Just as Israel is paying little heed to civilian casualties in Gaza in the course of its stated mission to obliterate Hamas, it refuses to take responsibility for killing journalists. The Israel Defense Forces told major news wires including Reuters and Agence France-Presse that it could not guarantee the safety of their employees in Gaza. In fact, not only are authorities in Israel not guaranteeing the safety of journalists, they have been conflating journalists with terrorists.

Israeli government officials have openly claimed that Gazan journalists are siding with Hamas and are therefore legitimate targets. On X, Benny Gantz, a former defense minister of Israel and currently part of the country’s wartime cabinet, posted that journalists who knew “about the massacre, and still chose to stand as idle bystanders while children were slaughtered — are no different than terrorists and should be treated as such.” 

Based on scanty and circumstantial evidence, a pro-Israel media watchdog accused photojournalists who have worked for the wire services, as well as The New York Times and CNN among others, of having prior knowledge of the October 7 attacks. One of these journalists, Hassan Eslaiah, has found himself singled out. In 2020, he posted a photograph of himself being kissed by a Hamas leader believed to have masterminded the October 7 attacks. Eslaiah also took some of the earliest photographs of the Hamas attacks. Both the Associated Press and CNN had used Eslaiah's photographs but now said they would no longer work with him. "While we have not at this time found reason to doubt the journalistic accuracy of the work he has done for us, we have decided to suspend all ties with him," said CNN in a statement.

What evidence Israel has to denigrate and doubt the integrity of the journalists it threatens is unclear. But what is clear is that it will be left to the consciences of individual journalists to stick up for their Gazan colleagues.

About 900 journalists have signed an open letter dated November 9 that declares their support for journalists in Gaza. “Without them, many of the horrors on the ground would remain invisible.” The letter writers, “a group of U.S.-based reporters at both local and national newsrooms,” note that “taken with a decades-long pattern of lethally targeting journalists, Israel’s actions show wide scale suppression of speech.” This suppression is abetted, the writers contend, by “Western newsrooms accountable for dehumanizing rhetoric that has served to justify ethnic cleansing of Palestinians.”  

In 1982, Palestinian American intellectual Edward Said wrote about "those who go on sanctimoniously about terrorism and are silent when it comes to Israel's almost apocalyptic state terrorism." Over 40 years later, almost nothing has changed when it comes to the mainstream Western media's coverage of the conflict. As Israel flattens Gaza, the Western media ties itself in semantic knots — insisting, for instance, on using phrases such as "Hamas-run health ministry" to shroud casualty figures in doubt or, worse, to do Israel's job for it by associating all residents of Gaza with terrorism. It is, as the letter writers put it, "journalistic malpractice" to fail or refuse to tell the whole story. The people who are best able to tell the story and whose voices are so rarely prioritized are in Gaza, silenced by both Israeli brutality and Western media outlets still unable to shake their biases and narrative tropes.

Ukraine’s forgotten children

Russia's notorious children's rights commissioner, Maria Lvova-Belova, has been inviting foreign journalists from the United States, Finland and Japan to visit camps in which thousands — possibly hundreds of thousands — of abducted Ukrainian children are held. In March, the International Criminal Court put a warrant out for Lvova-Belova's arrest, alongside that of her boss, Vladimir Putin, for unlawful deportation of children, a war crime. But this has yet to stop them. And now they're putting the children on display. A documentary that recently aired on British television showed how the children were often duped into thinking they were being taken to a holiday camp and then subjected to "re-education," including being made to speak only Russian and to sing patriotic Russian songs in front of inspectors.

Attempts to portray life at these camps as idyllic will strike most viewers as obvious propaganda, but Lvova-Belova is not giving up trying to persuade us otherwise. In social media posts — which my colleague Ivan Makridin translated for me — she recently described the meeting between the journalists and the children as a “mentorship” opportunity. The young Ukrainians apparently asked their foreign visitors what they thought of the camp. “I like it very much,” Lvova-Belova quotes a Japanese journalist as saying. “I think your faces are all beautiful, cheerful and happy.”

When you know you are meeting children who are being held captive and who are not speaking freely while the press officer is in the room, why go through with it? While I understand the journalistic impulse of wanting to see the camps and meet the children, I find the ethics of agreeing to go on a guided tour dubious. Especially when Lvova-Belova is going to twist your presence into “proof” that these children are somehow better off at a camp in Russia than with their families. Back in May, Coda’s editor-in-chief Natalia Antelava criticized Vice for agreeing to interview Lvova-Belova and failing to hold her to account. Instead, Antelava wrote, Vice gave Lvova-Belova and the Kremlin “a platform to spin and legitimize their narrative.” This latest invitation to foreign journalists looks like more of the same.

India’s creepy deep fakes

Last week, a Bollywood film star's face was attached to another woman's body in a salacious deep fake that went viral. It caused outrage, at least in part because of how convincing the video was and how easy it now seemed to use generative artificial intelligence to spread mischievous misinformation. Celebrities, the government and national newspaper editors made public calls for draft legislation that would punish the creators of fake videos. But that's easier said than done — defining "fakeness" under the law is harder than it sounds — and any law attempting to rein in this kind of material could pave the way for government overreach. It could also add fuel to the Indian government's case against end-to-end encryption, since this kind of technology could help mask the identities of people creating deep fakes. If this should come to pass, the pitfalls for privacy and opportunities for mass surveillance will be significant, and could have a much more profound effect on many more millions of people than a handful of salacious videos have had.

WHAT WE’RE READING:

  • I enjoyed this lengthy meander through the origins of machine-generated text by the academic Richard Hughes Gibson. In the 1960s, Gibson reveals, the author Italo Calvino was already prepared to concede that the “literature machine” might match or outdo the human writer. What the machine couldn’t do, though, was replicate the reader and the particularity the reader brings to the text — the sudden associations and minor epiphanies. “Calvino,” writes Gibson, “anticipated the urgent question of our time: Who will attend to the machines’ writing?”
  • The International Olympic Committee is hopping mad. A slick four-part Netflix documentary, narrated by Tom Cruise no less, reveals that corruption within the IOC is destroying the Olympics. Except, the "documentary" is fake: Cruise's voiceover is AI generated and no such program can be found on Netflix. The nine-minute episodes were uploaded to a largely Russian-language Telegram channel, which, unlike YouTube, has not taken the videos down. Is it a coincidence that they started emerging after the IOC banned the Russian Olympic Committee for its decision last month to recognize regional federations in occupied regions such as Kherson and Donetsk? This inquiring mind wants to know.

Why are climate skeptics speaking out about the Uyghur genocide? https://www.codastory.com/waronscience/uyghur-genocide-solar-energy/ Tue, 14 Nov 2023 11:12:02 +0000 https://www.codastory.com/?p=48055 For conservatives in the U.S., China’s assault on ethnic Uyghurs has become a near-perfect reason not to invest in solar energy

The post Why are climate skeptics speaking out about the Uyghur genocide? appeared first on Coda Story.

Last month, California’s Gavin Newsom made headlines across the world when he sat down with Chinese President Xi Jinping in Beijing. Flashing a smile for the cameras and going in for a chummy handshake, the Democratic governor’s message was clear. “Divorce is not an option,” he later told reporters of the rocky relationship between the United States and its closest economic rival. “The only way we can solve our climate crisis is to continue our long standing cooperation with China.” Reducing dependence on fossil fuels, Newsom said, is among the most urgent items on the shared agenda of the two countries.

Why did we write this story?

China’s control of the solar industry causes tension between respecting a people’s fundamental rights and addressing the crisis of climate change. This story explores how partisan politics, when injected into the mix, drags the issue into ethical quicksand.

Together, the U.S. and China are responsible for more than a quarter of greenhouse gas emissions worldwide, and both countries need to take action to reduce their dependence on fossil fuels, as Newsom argued on his trip. One technology that most scientists agree will make a meaningful difference for the climate is solar panels. U.S. appetite for photovoltaics is growing, and although it’s the world’s biggest polluter, China happens to dominate the global supply chain for solar panels: Chinese companies manufacture panels more efficiently and at greater scale than suppliers in other countries, and they sell them at rock-bottom prices.

But there's a big problem at the start of the supply chain. Part of what makes China's solar industry so prolific is that it is rooted in China's Xinjiang region, home to a vast system of forced labor in detention camps and prisons where an estimated 1-2 million ethnic Uyghurs and members of other ethnic minority groups are held against their will. There is strong evidence that Uyghurs in Xinjiang live in conditions akin to slavery. Key components of solar energy, in other words, are being brought to much of the world by the victims of what U.S. authorities call an ongoing genocide.

None of this material officially lands in the U.S., owing to the 2022 Uyghur Forced Labor Prevention Act, a federal law that restricts imports of any goods from Xinjiang — the only law of its kind among the world's biggest economies. Still, the topic of solar panel production — a critical weapon in today's arsenal of climate action — is intrinsically tangled up with Uyghur forced labor. Yet Newsom made no mention of the Uyghurs on his recent China tour, a silence that has become all too common among left-wing and climate advocacy groups. At the same time, the Uyghur plight has captured a certain element of the right-wing political zeitgeist in the U.S. for reasons that are more complicated than one might expect: The Uyghur genocide is a near-perfect reason not to invest in solar energy, a prime talking point for right-wing media personalities and Republican lawmakers known for promoting climate skepticism and disinformation.

Uyghur forced labor is also unlikely to have come up when U.S. climate envoy John Kerry met with his Chinese counterpart Xie Zhenhua in California last week. Their talks, Kerry later told delegates at a conference in Singapore, led “to some very solid understandings and agreements” in preparation for the upcoming COP28, the United Nations climate summit that begins in Dubai on November 30. The timing of the talks suggests that the U.S. acknowledges that Chinese dominance of the solar industry is unlikely to be challenged anytime soon. In the first half of 2023, Chinese exports of solar panels grew by 34% worldwide, and China already controls 80% of the global market share. 

Climate scientists say that we have perhaps only a few years left to reduce emissions and avoid a runaway greenhouse gas scenario, which could lead to rapid sea-level rise, mass desertification and potentially billions of climate refugees. Extreme weather events fueled by the changing climate are becoming more frequent and their impacts more devastating. Canada saw 18 million hectares of forest burn this year, emitting a haze that had people from Maine to Virginia donning KN95s just to walk outside. Last year in Pakistan, historic floods covered one-third of the country.  

“The lack of progress on emissions reduction means that we can be ever more certain that the window for keeping warming to safe levels is rapidly closing,” said Robin Lamboll, a climate scientist at Imperial College London, in a recent press statement.

There is an urgent need to reduce emissions from fossil fuels, and solar power is seen as an essential part of how to do this — it’s affordable and can be placed nearly anywhere. Without a rapid increase in the amount of solar installations around the world, limiting climate change might be impossible.

But right now, a huge proportion of solar installations are a product of Uyghur forced labor. A 2021 report from Sheffield Hallam University in the U.K. highlighted the solar industry's dependency on materials from Xinjiang, estimating that 45% of the world's solar-grade polysilicon comes from the region. The report detailed how Uyghurs and other minorities were made to live in camps that are "surrounded by razor-wire fences, iron gates, and security cameras, and are monitored by police or additional security." Factories are located within the camps, and Uyghurs cannot leave voluntarily. And there is evidence that workers are unpaid. One former camp detainee, Gulzira Auelhan, told Canadian journalists that she was regularly shocked with a stun gun and subjected to injections of unknown substances. She felt she was treated "like a slave."

For Uyghurs in exile, what is happening is clear — a genocide that aims to eliminate the Uyghur language, culture and identity and turn their homeland into another Chinese region. Mosques and old Uyghur neighborhoods are being replaced by hotels and high-rise apartments and populated by members of China’s dominant ethnic group: the Han Chinese. Mandarin Chinese is now the primary language taught in schools. “Putting it bluntly, the Uyghur genocide is more real and immediate than climate change,” says Arslan Hidayat, a Uyghur Australian program director at the nonprofit Campaign for Uyghurs. He believes that stories like Auelhan’s barely scratch the surface of what’s happening. 

“It’s still not widely known that Uyghur forced labor is used in the supply chain of solar panels,” said Hidayat.

Seaver Wang is a climate director at the California-based Breakthrough Institute, which published another report on the connections between Xinjiang and solar energy last year. Wang hoped the wave of research on the issue would be a wake-up call for the industry and for climate and energy nonprofits. But the reaction has been mixed at best. “Labor and some industry groups were very eager to talk about the issue,” he said. “But other constituencies, like solar developers and areas of the climate advocacy movement, who are really prioritizing deployment and affordability, didn’t want to rock the boat.”

Indeed, major environmental and climate groups have said little about the origins of so much of the world's solar energy technology, possibly out of fear of inadvertently harming the expansion of clean energy. Recent reports on solar in China from international organizations including Ember, Global Energy Monitor and Climate Energy Finance make no mention of the solar industry's links to Xinjiang.

The same is true for major American nonprofits. Even as they strongly support the expansion of solar, Sierra Club, 350.org, NRDC, Environmental Defense Fund and the National Wildlife Federation make no mention of Uyghur forced labor on their websites or social media. None agreed to speak to me for this story. 

Only the Union of Concerned Scientists mentions issues related to Uyghur forced labor on its website and agreed to be interviewed for this story. "UCS strongly advocates for justice and fairness to be centered in all our climate solutions," said Rachel Cleetus, policy director for the climate and energy program, via email. "The clean energy economy we are striving to build should not replicate the human rights, environmental and social harms of the fossil fuel based economy." Cleetus declined to comment on peer organizations' decisions not to acknowledge the issue.

Dustin Mulvaney, a professor of environmental studies at California’s San José State University, has a theory about why so many climate advocates and groups hesitate to speak on Uyghur forced labor. “It’s an area that people are uncomfortable talking about because they fear it undermines the objectives of getting more solar,” said Mulvaney. “It’s almost as if people are concerned that any information about solar that could be interpreted as a negative could be amplified through the same networks that are doing climate disinformation.”

To wit, U.S. think tanks like the Heritage Foundation and the Heartland Institute, both heavily right-leaning, have released dozens of blog posts, op-eds and interviews focusing on Uyghur forced labor. These groups are also notorious hubs of climate disinformation.

One headline from a Heartland Institute blog post warned that "China's Slave Labor, Coal-Fired, Mass-Subsidized Solar Panels Dominate the Planet." An article on the far-right news site Breitbart cautioned that the clean energy clauses in the 2022 Inflation Reduction Act "may fund China's Uyghur slavery." Further amplifying the focus on Uyghur forced labor in solar are right-wing media outlets like the Daily Signal and Newsmax and the pseudo-educational organization PragerU.

Alongside mentions of Uyghur forced labor in the solar industry, one typically finds claims that are far less factual — that the emissions generated throughout the life cycle of solar panels are as bad as those from fossil fuels, that climate change is not responsible for recent extreme weather events, or that "net zero" and socially responsible investment trends are insider tactics meant to weaken the American economy. Some even push political disinformation. There are claims that President Joe Biden is pro-solar because he has received donations from China or because his son, Hunter Biden, has links to China — and that U.S. climate envoy John Kerry is benefiting personally from his investments in Chinese solar.

Organizations like these are spreading climate skepticism, minimizing the threat of climate change, and casting doubt on its links to extreme weather events. This has also been the refrain from elected officials like Republican Sen. Rick Scott of Florida, sponsor of the Keep China Out of Solar Energy Act, a bill that would further prohibit federal funds from being used to buy solar components from Xinjiang.

Another common argument holds that domestic fossil fuel production is better for the economy than importing solar from China. Support for fossil fuels does seem to be a common link across the groups and political figures focused on the issue. In fact, politicians speaking out about Uyghur forced labor in solar are among the top recipients of political donations from the fossil fuel industry. According to data from Open Secrets, a nonpartisan project that tracks political spending, Scott, alongside two cosponsors of his Keep China Out of Solar Energy Act — Senators Marco Rubio and John Kennedy — accepted more contributions from the oil and gas industries than almost all other U.S. senators in 2022.

The U.S. is not the only country where this kind of narrative has found a home. Earlier this year, Taishi Sugiyama, who directs research at the Canon Institute for Global Studies, seized on the issue after officials in Tokyo announced a plan to mandate solar panels on all newly constructed homes in the city. Like conservatives in the U.S., Sugiyama cited the plight of the Uyghurs as a primary reason to divest from solar. But Sugiyama's think tank is a well-known source of climate disinformation in Japan.

“Sugiyama is basically using absolutely any argument he can, real or false, in order to pursue what he’s aiming for in terms of his anti-climate objectives,” said James Lorenz, the executive director of Actions Speak Louder, a corporate accountability nonprofit focused on the climate. Some of Sugiyama’s allies have close links to Japanese companies importing coal, natural gas and petroleum from abroad. Two of the institute’s board members represent Sumitomo and JICDEC, both major importers of fossil fuels in Japan.

Solar panels outside homes in the city of Hokuto in central Japan. Noboru Hashimoto/Corbis via Getty Images.

Early reports about China’s crackdown on ethnic Uyghurs, including the detention of thousands of people as part of a massive “political reeducation” program, emerged in 2017. Dustin Mulvaney, the environmental studies professor, thinks that would have been the optimal time to act. “Had the industry had that traceability in place back then, had they had this conversation back then, they might not find themselves in this situation today,” he said.

But now, six years later, both the climate crisis and the Uyghur human rights crisis have worsened. Implicit in the silence of many climate and environmental groups is the idea that, in order to address climate change, the Uyghur cause may have to be sacrificed. Mulvaney feels that environmental advocates have hesitated to criticize solar or bring up forced labor issues for fear of playing into anti-solar messaging.

Mulvaney has experienced this personally, seeing his critiques misquoted in right-wing media. "But I don't think it works that way. I think people are a little too guarded in protecting solar from criticism."

To the Breakthrough Institute’s Seaver Wang, being forced to choose between reclaiming human rights in Xinjiang and ramping up clean energy quickly enough to address climate change presents a false dichotomy. 

“We’re willing to have open and frank conversations around responsible sourcing everywhere but China,” said Wang. “I recognize that there are climate versus human rights trade-offs, but let’s talk about those trade-offs rather than just prioritizing climate, because it all factors into equity at the end.”

Hidayat, like many Uyghurs used to being ignored not only by climate activists but also by progressive politicians, is open to any support. He is glad to see people like Rick Scott proposing stronger regulations on solar imports from China, even if their motives are less than pure. At the same time, Hidayat is wary that such politicians might be using the Uyghur crisis for their own political benefit, and he would welcome more action from environmentalists.

“There is nothing clean about using solar panels linked to Uyghur forced labor,” said Hidayat. Instead, he says there needs to be a “change in the definition of what clean energy is. The whole supply chain, from A to Z, the raw materials all the way to its installation, has to be free of human rights abuses for it to actually be defined as green, clean tech.”

How do we get there? Wang wants to see a frank discussion, rather than the silence or politicization that has dominated the debate so far. 

“I do think that we could balance clean energy deployment, meet climate ambitions and address human rights in Xinjiang,” said Wang. “But I know it won’t be easy,” he said. “It’s not an unmitigated win-win.”

The post Why are climate skeptics speaking out about the Uyghur genocide? appeared first on Coda Story.

Wartime in the ‘digital wild west’  https://www.codastory.com/newsletters/israel-gaza-content-moderation-twitter/ Thu, 09 Nov 2023 14:03:10 +0000 https://www.codastory.com/stayonthestory/will-a-new-regulation-on-ai-help-tame-the-machine/ Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Musk taunts Wikipedia, Sri Lanka flirts with a new censorship tool, and Greek politicians continue to grapple with their spyware problem.

The post Wartime in the ‘digital wild west’  appeared first on Coda Story.

As Israel continues its advance into Gaza, the need for oversight and accountability around what appears on social media feels especially urgent. Forget for a minute all the stuff online that’s either fake or misinformed. There are reams of real information about this war that constantly trigger the censorship systems of Big Tech companies. 

Consider the subject of terrorism. The biggest players all have rules against content that comes from terrorist groups or promotes their agendas, many of which align with national laws. This might sound uncomplicated, but the governing entity in Gaza, for instance, is Hamas, a designated terror organization in the eyes of Israel and, even more importantly, the U.S., home to the biggest tech companies on earth. Digital censorship experts have expressed well-founded fears that between Big Tech’s self-imposed rules and regional policies like the EU’s Digital Services Act, companies could be censoring critical information such as evidence of war crimes or making it impossible for people in the line of fire to access vital messages.

Although the stakes here couldn’t be higher, we also know that content moderation work is too often relegated to tiny teams within a company or outsourced to third parties.

Companies are typically coy about how this works behind the scenes, but in August the Digital Services Act went into effect, requiring the biggest of the Big Techs to periodically publish data about what kinds of content they’re taking down in the EU and how they’re going about it. And last week, the companies delivered. The report from X showed some pretty startling figures about how few people are on the front lines of content moderation inside the company. It’s been widely reported that these teams were gutted after Elon Musk took over a year ago but I still wasn’t prepared for the actual numbers. The chart below shows how many people X currently employs with “linguistic expertise” in languages spoken in the EU.

X has expertise on fewer than half of the bloc’s official languages, and for most of them, it employs literally one or two people per language. The languages with teams in the double digits are probably explained by a combination of regulation, litigation and political threats that have tipped the scales in places like Germany, Brazil and France. But for a company with this much influence on the world, the sheer lack of people is staggering.

Industry watchers have jumped all over this. “There is content moderation for the English-speaking world, which is already not perfect, and there is the Digital Wild West for the rest of us,” wrote Roman Adamczyk, a network analyst who previously worked with the Institute for Strategic Dialogue. “Will this change in light of the 2024 elections in Finland, Lithuania, Moldova, Romania and Slovakia?” asked Mathias Vermeulen, director of the privacy litigation group AWO. Great question. Here are a few more, in no particular order:

What are people who speak Hungarian or Greek — of which there are about 13 million each in the EU — supposed to make of this? What about all the places in the EU where the Russian language has a big presence, sometimes of the fake news variety? What happens if the sole moderator for Polish gets the flu? Is there any recourse if the two moderators for Hebrew, whose jobs I seriously don’t envy right now, get into an argument about what counts as “terrorist” content or “incitement to violence”? These moderators — “soldiers in disguise” on the digital battlefield, as one Ethiopian moderator recently put it to Coda — have serious influence over what stays up and what comes down.

After reading accounts from moderators working through Ethiopia’s civil war, I shudder to think of what these staffers at X are witnessing each day, especially those working in Arabic or Hebrew. The imperative to preserve evidence of war crimes must weigh heavily on them. But ultimately, it will be the corporate overlords — sometimes forced by the hands of governments — who decide what gets preserved and what will vanish.

GLOBAL NEWS

Elon Musk has once again been taking potshots at the world’s largest online encyclopedia. Two weeks back, he poked fun at the Wikimedia Foundation’s perennial donation drive and then jokingly considered paying the foundation $1 billion to change the platform’s name to — so sorry — “Dickipedia.” It is hard to know where to begin on this one, except to say that while Wikipedia functions on a fraction of the budget that X commands, it takes things like facts and bias a lot more seriously than Musk does and supports 326 active language communities worldwide. In the meantime, Wikipedia’s fate in the U.K. still hangs in the balance. Regulators are sorting out the implementation of the country’s new Online Safety Act, which will require websites to scan and somehow remove all content that could be harmful to kids before it appears online. There’s a lot wrong with this law, including the fact that it will inspire other countries to follow suit.

One recent copycat is Sri Lanka, where the parliament is now considering a bill by the same name. Although proponents say they’re trying to help protect kids online, Sri Lanka’s Online Safety Bill treads pretty far into the territory of policing online speech, with an even broader mandate than its British counterpart. One provision aims to “protect persons against damage caused by communication of false statements or threatening, alarming or distressing statements.” Another prohibits “coordinated inauthentic behavior” — an industry term that covers things like trolling operations and fake news campaigns. A committee appointed by Sri Lanka’s president gets to decide what’s fake. Sanjana Hattotuwa, research director at the New Zealand-based Disinformation Project, has pointed out the clear pitfalls for Sri Lanka, where digital disinfo campaigns have been a hallmark of national politics for more than a decade. In an editorial for Groundviews, Hattotuwa argued that the current draft will lead to “vindictive application, self-serving interpretation, and a license to silence,” and predicted that it will position political incumbents to tilt online discourse in their favor in the lead up to Sri Lanka’s presidential election next year.

Greek lawmakers pushed through a ban on spyware last year, after it was revealed that about 30 people, including journalists and an opposition party leader, were targeted with Predator, a mobile surveillance software made by the North Macedonian company Cytrox. But efforts to get to the bottom of the scandal that started it all — who bought the spyware, and who picked the targets? — have been stymied, thanks in part to the new conservative and far-right elements in parliament. The new government has overhauled the independent committee that was formed to investigate the spyware scandal, in what opposition lawmakers called a “coup d’etat.” And now two of the committee’s original members are being investigated over allegations that they leaked classified information about the probe. When it comes to regulating — in this case, banning — spyware, EU countries probably have the best odds at actually making the rules stick. But what’s happened in Greece over the last 18 months shows that it’s still an uphill battle.

WHAT WE’RE READING

  • Wired’s Vittoria Elliott has a new report on the rise of third-party companies that provide what’s known in the tech industry as “trust and safety” services. A key takeaway of the piece is that when companies outsource this kind of work, it means they’re “outsourcing responsibilities to teams with no power to change the way platforms actually work.” That’s one more thing to worry about.
  • Beloved sci-fi writer and open internet warrior Cory Doctorow brought us a friendly breakdown this week of some really important legal arguments being made around antitrust law and just how harmful Amazon is to consumers and sellers alike. In a word, says Doctorow, it is “enshittified.” Read and learn.

In Africa’s first ‘safe city,’ surveillance reigns https://www.codastory.com/authoritarian-tech/africa-surveillance-china-magnum/ Wed, 08 Nov 2023 13:33:21 +0000 https://www.codastory.com/?p=48029 Nairobi boasts nearly 2,000 Huawei surveillance cameras citywide. But in the nine years since they were installed, it is hard to see their benefits.

The post In Africa’s first ‘safe city,’ surveillance reigns appeared first on Coda Story.

Nairobi purchased its massive traffic surveillance system in 2014 as the country was grappling with a terrorism crisis. Today, the city boasts nearly 2,000 Huawei surveillance cameras citywide, all sending data to the police. On paper, the system promised the ultimate silver bullet: It put real-time surveillance tools into the hands of more than 9,000 police officers. But do the cameras work?

In Africa’s first ‘safe city,’ surveillance reigns

Lights, cameras, what action? In Nairobi, the question looms large for millions of Kenyans, whose every move is captured by the flash of a CCTV camera at intersections across the capital.

Though government promises of increased safety and better traffic control seem to play on a loop, crime levels here continue to rise. In the 1990s, Nairobi, with its abundant grasslands, forests and rivers, was known as the “Green City in the Sun.” Today, we more often call it “Nairobbery.”

Special series

This is the third in a series of multimedia collaborations on evolving systems of surveillance in medium-sized cities around the world by photographers at Magnum Photos, data geographers at the Edgelands Institute, an organization that explores how the digitalization of urban security is changing the urban social contract, and essayists commissioned by Coda Story.

Our first two essays examined surveillance in Medellín, Colombia and Geneva, Switzerland. Next up: Singapore.

I see it every time I venture into Nairobi’s Central Business District. Navigating downtown Nairobi on foot can feel like an extreme sport. I clutch my handbag, keep my phone tucked away and walk swiftly to dodge “boda boda” (motorbike) riders and hawkers whose claim on pedestrian walks is quasi-authoritarian. Every so often, I’ll hear a woman scream “mwizi!” and then see a thief dart down an alleyway. If not that, it will be a motorist hooting loudly at a traffic stop to alert another driver that their vehicle is being stripped of its parts, right then and there.

Every city street is dotted with cameras. They fire off a blinding flash each time a car drives past. But other than that, they seem to have little effect. I have yet to hear of or witness an incident in which thugs were about to rob someone, looked up, saw the CCTV cameras then stopped and walked away.

Nairobi launched its massive traffic surveillance system in 2014 as the country was grappling with a terrorism crisis. A series of major attacks by al-Shabab militants, including the September 2013 attack at Nairobi’s Westgate shopping complex in which 67 people were killed, left the city reeling and politicians under extreme pressure to implement solutions. A modern, digitized surveillance system became a national security priority. And the Chinese tech hardware giant Huawei was there to provide it. 

A joint contract between Huawei and Kenya’s leading telecom, Safaricom, brought us the Integrated Urban Surveillance System, and we became the site of Huawei’s first “Safe City” project in Africa. Hundreds of cameras were deployed across Nairobi’s Central Business District and major highways, all networked and sending data to Kenya’s National Police Headquarters. Nairobi today boasts nearly 2,000 CCTV cameras citywide.

On paper, the system promised the ultimate silver bullet: It put real-time surveillance tools into the hands of more than 9,000 police officers to support crime prevention, accelerated responses and recovery. Officials say police monitor the Kenyan capital at all times and quickly dispatch first responders in case of an emergency.

But do the cameras work? Nine years since they were installed, it is hard to see the benefits of these electronic eyes that follow us around the city day after day.

Early on, Huawei claimed that from 2014 to 2015, crime had decreased by 46% in areas supported by their technologies, but the company has since scrubbed its website of this report. Kenya’s National Police Service reported a smaller drop in crime rates in 2015 in Nairobi, and an increase in Mombasa, the other major city where Huawei’s cameras were deployed. But by 2017, Nairobi’s reported crime rates surpassed pre-installation levels.

According to a June 2023 report by Coda’s partners at the Edgelands Institute, an organization that studies the digitalization of urban security, there has been a steady rise in criminal activity in Nairobi for nearly a decade.

So why did Nairobi adopt this system in the first place? One straightforward answer: Kenya had a problem, and China offered a solution. The Kenyan authorities had to take action and Huawei had cameras to sell. So they made a deal.

Nairobi’s surveillance apparatus today has become part of the “Digital Silk Road” — China’s quest to wire the world. It is a central component of the Belt and Road Initiative, an ambitious global infrastructure development strategy that has spread China’s economic and political influence across the world. 

This hasn’t been easy for China in the industrialized West, with companies like Huawei battling sanctions by the U.S. and legal obstacles both in the U.K. and European Union countries. But in Africa, the Chinese technology giant has a quasi-monopoly on telecommunications infrastructure and technology deployment. Components from the company make up around 70% of 4G networks across the continent.

Chinese companies also have had a hand in building or renovating nearly 200 government buildings across the continent. They have built secure intra-governmental telecommunications networks and gifted computers to at least 35 African governments, according to research by the Heritage Foundation.

Grace Bomu Mutung’u, a Kenyan scholar of IT policy in Kenya and across Africa, currently working with the Open Society Foundations, sees this as part of a race to develop and dominate network infrastructure, and to use that position to gather and capitalize on the data that flows through networks.

“The Chinese are way ahead of imperial companies because they are approaching it from a different angle,” she told me. She posits that for China, the Digital Silk Road is meant to set a foundation for an artificial intelligence-based economy that China can control and profit from. Mutung’u derided African governments for being so beholden to development that their leaders keep missing the forest for the trees. “We seem to be caught in this big race. We have yet to define for ourselves what we want from this new economy.”

The failure to define what Africa wants from the data-driven economy and an obsession with basic infrastructure development projects is taking the continent through what feels like another Berlin scramble, Mutung’u told me, referring to the period between the 19th and early 20th centuries that saw European powers increase their stake in Africa from around 10% to about 90%.

“Everybody wants to claim a part of Africa,” she said. “If it wasn’t the Chinese, there would be somebody else trying to take charge of resources.” Mutung’u was alluding to China’s strategy of financing African infrastructure projects in exchange for the continent’s natural resources.

A surveillance camera in one of Nairobi’s matatu buses.

Nairobi was the first city in Africa to deploy Huawei’s Safe City system. Since then, cities in Egypt, Nigeria, South Africa and a dozen other countries across the continent have followed suit. All this has drawn scrutiny from rights groups who see the company as a conduit in the exportation of China’s authoritarian surveillance practices. 

Indeed, Nairobi’s vast web of networked CCTV cameras offers little in the way of transparency or accountability, and experts like Mutung’u say the country doesn’t have sufficient data protection laws in place to prevent the abuse of data moving through surveillance systems. When the surveillance system was put in place in 2014, the country had no data protection laws. Kenya’s Personal Data Protection Act came into force in 2019, but the Office of the Data Protection Commissioner has yet to fully implement and enforce the law.

In a critique of what he described at the time as a “massive new spying system,” human rights lawyer and digital rights expert Ephraim Kenyanito argued that the government and Safaricom would be “operating this powerful new surveillance network effectively without checks and balances.” A few years later, in 2017, Privacy International raised concerns about the risks of capturing and storing all this data without clear policies on how that data should be treated or protected.

There was good reason to worry. In January 2018, an investigation by the French newspaper Le Monde revealed that there had been a data breach at the African Union headquarters in Addis Ababa following a hacking incident. Every night for five years, between 2012 and 2017, data downloaded from AU servers was sent to servers located in China. The Le Monde investigation alleged the involvement of the Chinese government, which denied the accusation. In March 2023, another massive cyber attack at AU headquarters left employees without access to the internet and their work emails for weeks.

The most recent incident brought to the fore growing concerns among local experts and advocacy groups about the surveillance of African leaders as Chinese construction companies continue to win contracts to build sensitive African government offices, and Chinese tech companies continue to supply our telecommunication and surveillance infrastructure. But if these fears have had any effect on agreements between the powers that be, it is not evident.

As the cameras on the streets of Nairobi continue to flash, researchers continue to ponder how, if at all, digital technologies are being used in the approach to security, coexistence and surveillance in the capital city.

The Edgelands Institute report found little evidence linking the adoption of surveillance technology and a decrease in crime in Kenya. It did find that a driving factor in rising crime rates was unemployment. For people under 35, the unemployment rate has almost doubled since 2015 and now hovers at 13.5%.

In a 2022 survey by Kenya’s National Crime Research Centre, a majority of respondents identified community policing as the most effective method of crime reduction. Only 4.2% of respondents identified the use of technology such as CCTV cameras as an effective method.

Meanwhile, the system has raised concerns among privacy-conscious Kenyans about potential infringements of the right to privacy and about the technical capabilities of these technologies, including AI facial recognition. The secrecy that often surrounds this surveillance, the Edgelands Institute report notes, complicates trust between citizens and the state.

It may be some time yet before the lights and the cameras lead to action.

Photographer Lindokuhle Sobekwa’s portable camera obscura uses a box and a magnifying glass to take images for this story.

Why India is awash with anti-Palestine disinformation https://www.codastory.com/newsletters/india-hindu-nationalism-gaza/ Tue, 07 Nov 2023 15:00:49 +0000 https://www.codastory.com/?p=48188 Disinfo Matters looks beyond fake news to examine how the manipulation of narratives and rewriting of history are reshaping our world.

The post Why India is awash with anti-Palestine disinformation appeared first on Coda Story.

Hello again,

In the last incarnation of this newsletter, Coda’s editor-in-chief Natalia Antelava worked each week to examine the disinformation being generated around Russia’s invasion of Ukraine. Deliberate distortion of the truth had long been a weapon in Vladimir Putin’s arsenal, but the war laid bare just how ineffective we were at countering it. Fact-checking alone is of little use in the face of targeted lies intended to sow division and advance particular narratives. 

We relaunch now as the war in Gaza appears to have destroyed any lingering optimism about citizen journalism, open-source intelligence and the free flow of information helping to dispel disinformation rather than be hijacked by bad actors. In this newsletter, we will continue to scrutinize narratives and the way information is deployed by people in power.

I’m based in New Delhi, which is fast becoming one of the disinformation capitals of the world. We will be watching India closely, but the Coda team is scattered around the globe — in Rome, Istanbul, London, Washington, D.C. and beyond. The patterns of misinformation we will examine here are global, as are their impacts.

Online discourse is dominated by unreliable narrators as never before. 2024 is an election year in India and the United States, and sophisticated disinformation is likely to play a major role in both. Understanding and shedding light on how narratives are manipulated, and why, is work we all have to be prepared to do.

Why India is awash with anti-Palestine disinformation

Talk of unreliable narrators brings us with a sad inevitability to India’s Hindu nationalist troll army. On Sunday, October 29, near the coastal city of Kochi in the southern state of Kerala, a meeting of Jehovah’s Witnesses was bombed. Three people died and more than 50 were injured. 

Almost immediately, Hindu nationalists — including those within India’s governing Bharatiya Janata Party — went online to cast blame. At the time the bomb went off in Kerala, the state’s chief minister was in Delhi at a protest to express solidarity with Palestine — India’s traditional position, albeit one that is now contrarian because the BJP stands firmly with Israel. Rajeev Chandrasekhar, a minister in Prime Minister Narendra Modi’s cabinet, wrote on X that the Kerala chief minister’s “shameless appeasement politics” meant he was “sitting in Delhi and protesting against Israel, when in Kerala open calls by Terrorist Hamas for Jihad is causing attacks and bomb blasts on innocent Christians.” Chandrasekhar, despite his important role as a government minister, seemed to have no qualms about speculating on the Kochi bombing and assigning guilt. 

Kerala is governed by a left-wing coalition, making it a favorite target of Hindu nationalist scorn. Amit Malviya, head of the BJP’s National Information and Technology department, followed his party colleague’s lead. The Kerala chief minister “seems more worried about Israel defending itself against Hamas, a terrorist group,” wrote Malviya, than Christians being attacked in Kochi. Alongside the BJP bigwigs, a chorus of Hindu nationalists made their feelings clear.  

The day before the bombings, in another part of Kerala, a pro-Palestine rally had been held. Khaled Mashal, the former head of Hamas, addressed the crowd virtually from his home in Qatar. “Together,” he said, “we will defeat the Zionists.” Posting a video of the rally on X, a popular Hindu nationalist account drew a link with the bombing of the Jehovah’s Witness meeting. “Jews are targeted,” the account claimed falsely. “Do we even need an inquiry to know who did it???” Of course, Jehovah’s Witnesses are not Jewish and, as it turned out, the Kochi bombing had nothing to do with Muslims, much less Hamas. 

A former Jehovah’s Witness confessed on Facebook and then to the Kerala police that he was behind the bombings. The police are currently verifying his claims. But Chandrasekhar, the cabinet minister, doubled down. Quoting former U.S. Secretary of State Hillary Clinton, he wrote on X: “You can’t keep snakes in your backyard and expect them only to bite your neighbors. You know, eventually those snakes are going to turn on whoever has them in the backyard.” In 2011, Clinton warned the Pakistani government about harboring terrorist networks. Chandrasekhar used her words to argue that “appeasement politics” – shorthand for the supposed liberties extended to minorities, particularly Muslims, at the expense of Hindus – had somehow led to the Kochi blast. 

Blaming Muslims for the Kochi bombing, regardless of the facts, is in keeping with the disinformation techniques frequently used by Hindu nationalists. Hindu nationalist trolls have been prolific and persistent spreaders of false anti-Palestinian information about the war in Gaza. It advances their narrative that Islam and terror are synonymous and that India, with its large Muslim minority, is in the same boat as Israel. 

This is a new, specifically Hindu nationalist position, and it has never been the official Indian position. In fact, India, with its long-standing desire to be a leading voice of the decolonized Global South, has always supported the Palestinian people’s right to self-determination. India was the first non-Arab country to forge diplomatic relations with Yasser Arafat’s Palestine Liberation Organization, as far back as 1975, and promptly recognized Palestinian statehood in 1988, after Arafat’s declaration of independence. 

It wasn’t until 1992 that India formally established diplomatic relations with Israel. In 2017, Modi became the first Indian prime minister to visit Israel, signaling his desire to forge a deep friendship between countries that he said had much in common. Modi’s warmth towards Israel reflected both India’s relatively recent reliance on Israeli defense and cybersecurity products — spyware among them — as well as the admiration that Hindu nationalists have for what they see as a muscular state unafraid of militarily asserting its interests. Israel, Hindu nationalists say, is a model for their own dream of establishing India as a Hindu nation.

It’s an ideological position that helps explain why on October 27, India chose to abstain rather than vote, like most other countries, to pass a non-binding resolution to seek a “humanitarian truce” in Gaza. Sharad Pawar, a veteran politician, criticized India’s abstention as “the result of total confusion in the Indian government’s policy.” 

The Modi government’s official line is that it has suffered intensely from terrorism and now takes a “zero-tolerance approach to terrorism.” But Islamophobia is at the heart of Hindu nationalist support for Israel’s war in Gaza. And India’s Hindu nationalist trolls appear to be willing to tell whatever lie is necessary to advance their single-point agenda. “What Israel is facing today,” posted the official BJP account on X on October 7, “India suffered between 2004-2014. Never forgive, never forget.” 

This is politicized misinformation by the governing party of India. The years referred to span the two terms of Modi’s predecessor Manmohan Singh, the message being that without the BJP India is vulnerable to Islamic terror. Not surprisingly, a BJP member later argued that “Hindu nationalists see Israel’s war on Gaza as their own.” 

Past Indian governments have labeled insurgents of all religions (and none) as terrorists at one time or another, and terrorist activity in India far predates 2004. But it suits the BJP to act as if it alone can protect Indians from terror. By claiming that Indian Hindus are in the same existential struggle as Israeli Jews — both facing down Islamic terrorism — Hindu nationalists, including those in government, are advancing their narrative that India must abandon its constitution and become a Hindu nation. War in Gaza gives them the opportunity to use misinformation tactics that have been perfected in domestic politics on the global stage.

The BJP’s chokehold on information

Last month, the X account belonging to the Indian American Muslim Council was censored in India after a request from the Indian government. This effectively means that users in India will be barred from seeing any IAMC tweets, as well as those of the IAMC’s ally, the Washington, D.C.-based advocacy group Hindus for Human Rights. Both organizations have been critical of Indian government policies and drawn attention to minority rights and caste issues that Modi sweeps under the carpet on his visits to Western capitals. “We were not expecting it,” IAMC’s Executive Director Rasheed Ahmed told my colleague Avi Ackermann about being booted off X in India. “But we were not surprised.” By “complying with these censorship requests,” Ahmed wrote in an email, “X and Elon Musk have effectively abetted the Indian Government’s effort to expand its authoritarian censorship campaign overseas.” 

The Indian government is trigger-happy when it comes to depriving people of access to information, shutting down the internet a world-leading 84 times in 2022. It has blocked the social media accounts of credible if critical sources, including journalists, on the grounds of national security. At the same time, the government ignores trolling and the spreading of disinformation by its Hindu nationalist supporters. And, in the words of Apar Gupta, founder of the Delhi-based Internet Freedom Foundation, it has framed digital data laws to “enable unchecked state surveillance.” The Modi government is a disinformation triple threat. 

WHAT WE’RE READING:

  • This piece by Nilesh Christopher in Rest of World is simultaneously funny and scary, though by the end more scary than funny. In India, Instagram reels are being made with voice cloning tools powered by artificial intelligence that show Modi singing hit songs in numerous Indian languages from Punjabi to Tamil. As Christopher points out, “the videos, though lighthearted, serve a larger political purpose.” By cloning his voice, Modi can be made accessible to parts of the country where most people don’t speak Hindi, the language in which Modi gives most of his speeches. With elections coming up next year, this could be a boon for him in south India, where Modi has little support.   
  • “Taking a side in a war does not require taking positions on a work of fiction,” wrote Pamela Paul in The New York Times about the decision to not publicly honor the Palestinian author Adania Shibli at the Frankfurt Book Fair for having won a German literary prize. In a different life, I edited a couple of short stories by Shibli for a little magazine. When I reached out to her, Shibli was gracious enough to thank me (twice!) for my concern. As for the cancellation of the ceremony — we truly live in morally vacuous times.

The post Why India is awash with anti-Palestine disinformation appeared first on Coda Story.

How Memorial survives after its liquidation https://www.codastory.com/ru/memorial/ Tue, 07 Nov 2023 11:36:44 +0000 https://www.codastory.com/?p=47880 Despite the Russian Supreme Court’s liquidation order, Memorial continues its work in Russia

In December 2021, a few months before Russia’s invasion of Ukraine, the Russian Supreme Court ordered the liquidation of Memorial, Russia’s oldest and largest human rights organization. On the day Memorial was named a Nobel Peace Prize laureate in 2022, security forces raided and ransacked the organization’s Moscow office.

Despite the crackdown by the authorities, Memorial continues its work. More than that, its staff, led by a cohort of respected older historians, have not only kept the organization from perishing but are steering it through the most difficult of times for dissenting Russians.

The organization has no office or legal status in Russia, and its bank accounts are frozen. But at a time when almost all independent Russian media operate in exile and the Kremlin’s critics sit either in prison or abroad, Memorial continues to do visible and important work: it publishes books, monitors the trials of Ukrainian prisoners of war in Russia, offers free consultations to relatives of people repressed during the Soviet era, advocates for a growing list of political prisoners in Russia and is expanding its offices outside the country.

Despite all the restrictions, Memorial does not hide its activities. Tickets for its remaining “Topography of Terror” walking tours sell out almost instantly. One tour route passes right by the gates of Butyrka, now Pre-Trial Detention Center No. 2, a notoriously grim prison. Dissidents were held there in Soviet times, and today Memorial’s tour ends with participants sitting down at a table to write letters to a new generation of Russians arrested in political cases and awaiting trial in Butyrka.

“Our work could not stop for a single day,” says Irina Shcherbakova, a historian and co-founder of Memorial.

The annual “Returning the Names” commemoration, at which people line up to read out the names of those killed by the Soviet regime, takes place on October 29. It began in 2007, when it was held directly in front of the former KGB headquarters in Moscow. In recent years the Moscow authorities have refused to authorize Memorial’s event, and it now takes place in cities around the world.

The war in Ukraine has created an entirely new reality and has further focused the Russian authorities’ attention on the work of Memorial, which has spent years investigating Soviet-era crimes and drawing attention to political repression in present-day Russia.

The organization has managed to operate in Russia for as long as it has thanks to a strategy devised by its founders. As its members like to point out, Memorial is not a single organization but a movement, one that has grown since its founding in 1987 into a sprawling, decentralized network of organizations and individuals.

More than 200 Memorial members and volunteers now work around the world; just under a hundred remain in Russia. Because each branch is registered independently, shutting down the entire network inside the country would require 25 separate court cases. The movement also has offices in Belgium, the Czech Republic, France, Germany, Israel, Italy, Lithuania, Sweden, Switzerland and Ukraine. Earlier this year, two of Memorial’s shuttered Russian organizations re-registered outside the country under new names in Switzerland and France.

“It was clear from the very beginning that we did not want a vertical structure,” Shcherbakova explains. “We believed this was a grassroots story. If it had been a vertical hierarchy, the Russian authorities would have destroyed it long ago.”

Memorial’s foreign branches long consisted mostly of local historians studying the Soviet period, but they are now also taking in staff who previously worked in Russia.

Over the past year and a half, the Prague office has become a kind of new way station. In October 2022, Boris Belenkin, a Memorial board member and the head of its library, left Russia at the age of 70 and moved to Prague. Memorial staff and visiting scholars now stay there, and the office hosts seminars.

From the Prague office, Memorial is relaunching one of its best-loved programs: a history research competition for high school students. Its aim is to encourage schoolchildren to pursue independent research into 20th-century Russian history. Until 2021 the competition was held across Russia, and the finalists flew to Moscow to present their work at the organization’s headquarters. For many students from distant regions, it was their only chance to see the capital. Over time, Russian schools dropped out of the competition under pressure from local officials and “patriotically minded” parents.

Inside Russia, the pressure on staff keeps mounting. In May, Alexander Chernyshov, chairman of the Perm-based Center for Historical Memory, the successor to the local Memorial branch, was arrested on hooliganism charges and has been held in pre-trial detention ever since. Branches in Yekaterinburg and other cities face constant harassment and arbitrary fines from local authorities, pushing some of them to the brink of closure. Yuri Dmitriev, a prominent Memorial historian, is serving a 15-year prison sentence in a case that Memorial considers politically motivated. Both Chernyshov and Dmitriev are now held in facilities that were once part of the Soviet Gulag camp system.

In Moscow, nine Memorial members have become targets of criminal prosecution. In May, the Russian authorities charged Oleg Orlov, chairman of the council of the Memorial Human Rights Center. Investigators accused Orlov of “discrediting the Russian armed forces” over an article about the “bloody war unleashed by Putin’s regime in Ukraine.” On October 11, a court found Orlov guilty and fined him 150,000 rubles. The prosecutor had also requested a psychiatric evaluation, citing his “heightened sense of justice, lack of a self-preservation instinct, and posturing before citizens” (the court rejected the request).

Memorial believes the criminal cases against its staff are being brought because of the organization’s work defending political prisoners. Its human rights arm, the Memorial Center, maintains a database of people imprisoned for political reasons that international organizations frequently cite. The center also regularly publishes updates on prisoners and their cases, interviews their family members and organizes letter-writing campaigns. Today there are 610 people on Memorial’s list, a number that has tripled over the past five years.

Shcherbakova, a Memorial director and a historian of the Soviet Union, says the number of political prisoners is now higher than in the late Soviet period. “In my view, the current situation is more frightening and more brutal than it was then,” she says.

Memorial has been in the Kremlin’s crosshairs since 2014, when it condemned the annexation of Crimea and Russia’s involvement in the war in the Donbas. The Russian government’s most powerful legal weapon in such cases is the law on foreign agents, which allows it to designate as “foreign agents” organizations and individuals that formally receive funding from abroad. Adopted in 2012 and expanded in 2020, the law provides for prison terms of up to five years for failing to comply with its labeling and financial reporting requirements.

The Russian authorities also use the foreign agents law to go after individuals. In mid-October, Alsu Kurmasheva, a Radio Liberty journalist who holds Russian and U.S. citizenship, was detained in Kazan. The authorities accused her of failing to register as a foreign agent during a family trip to Russia. Kurmasheva, too, faces up to five years in prison.

How autocrats around the world came to use the foreign agents law

Russia’s law on “foreign agents,” adopted in 2012, is now known far beyond the country’s borders.

It gives the authorities the right to place on a list of “foreign agents” NGOs and private individuals who receive support from abroad and engage in political activity. Once listed, they risk prison terms of up to five years if they fail to report their activities in exact compliance with the law’s requirements.

Human rights advocates have fiercely criticized the law, calling it the latest instrument for suppressing dissent and persecuting NGOs and journalists. But the ideas behind it have already been taken up by authoritarian leaders and anti-democratic regimes around the world.

“Now it has all merged into the words ‘foreign agent’: espionage, counterrevolutionaries, Trotskyists,” says Belenkin, the director of Memorial’s library in Prague, who was added to the foreign agents list in 2022.

To ratchet up the pressure on Memorial, the Russian authorities filed a new lawsuit with the Supreme Court in 2021. It claimed that Memorial had violated the law by failing to mark several social media posts with the “foreign agent” label. Despite the formal charge over the missing labels, the state prosecutor, Alexei Zhafyarov, used his closing statement to deliver a blistering attack on the organization, as if explaining the real grievances the Russian authorities held against Memorial.

“Memorial, speculating on the subject of 20th-century political repression, creates a false image of the USSR as a terrorist state and whitewashes and rehabilitates Nazi criminals and traitors to the Motherland with the blood of Soviet citizens on their hands,” Zhafyarov declared, mocking Memorial along the way for “claiming the role of the nation’s conscience” while somehow “not wanting to remind readers in every one of its publications who pays for them.”

“Why, instead of pride in the country that liberated the world from fascism, are we being asked to repent for what turns out to be a hopelessly bleak past?” Zhafyarov continued.

Grigory Vaypan, one of Memorial’s defense lawyers, said Zhafyarov’s speech exposed the authorities’ true motivation for liquidating Memorial. Instead of specifying which posts Memorial had failed to mark with the “foreign agent” label, Vaypan says, the state prosecution effectively declared: “We must shut down Memorial because Memorial pushes a narrative that does not serve the interests of the state. Memorial must be shut down because Memorial gets in the way of the state narrative that ‘we, the Russian state, victorious in the Second World War, are accountable to no one.’”

“Rereading the closing statement now, it makes far more sense than it did at the time. What the prosecutor said was a prologue to the war,” Vaypan says.

Memorial lost its appeal against the liquidation in March 2022, as Russian troops were marching on Kyiv. Like others in Russian civil society, Memorial’s members began asking themselves after the war started what had gone wrong.

The Russian invasion has also pushed Memorial into a difficult, at times contentious, internal reckoning with its own legacy.

“We are trying to understand what was wrong in our work over the previous 35 years: how we failed to build a relationship with Russian society, how we failed to see certain complex forms of discrimination and oppression that led to today’s war,” says Alexandra Polivanova, the curator of Memorial’s cultural programs. “It became clear that there were blind spots in our own work that we did not see, things we did not engage with quite accurately, did not work on quite accurately, which made it possible that we all allowed such a terrible war to happen.”

In 2022, the Nobel Peace Prize was awarded jointly to Russia’s Memorial, Ukraine’s Center for Civil Liberties and the Belarusian human rights defender Ales Bialiatski. The reaction around the world was mixed. Oleksandra Matviichuk, the head of the Ukrainian organization, praised Memorial’s work but declined to be interviewed alongside Yan Rachinsky, who accepted the award on Memorial’s behalf in Oslo. Ukraine’s ambassador to Germany called the joint recognition of organizations from the three countries “truly devastating,” given the context of the ongoing war that Russia launched partly from Belarusian territory.

Still, not everyone at Memorial believes the organization should be judged through the prism of the war and Russia’s hard turn toward authoritarianism.

“A small civil society organization, even a networked one, with very limited resources, could not, of course, have changed or accomplished anything here,” says Belenkin, the director of Memorial’s library.

Alexandra Polivanova, the creator of “Topography of Terror,” who is a generation younger than many of Memorial’s leaders, believes the organization must reexamine its legacy in light of the war. That conversation among Memorial’s members, she says, is not an easy one. Her own work is changing too: one new tour is devoted to the Ukrainian human rights defender Petro Grigorenko, a major general in the Soviet armed forces who spoke out against the Soviet invasion of Czechoslovakia, was declared insane by the Soviet authorities, was subjected to punitive psychiatric treatment and became a dissident.

“This story did not strike us as so pivotal, so important, before. But now we understand that it is a very important story for us to tell,” Polivanova says.

Since the start of the Russian invasion, the updated tours, including a program about Grigorenko’s life in Moscow, have seen a surge in popularity. In September 2022, Polivanova added a reading of Ukrainian poems, written by authors who perished in Stalin’s purges, to the tour of Sandarmokh in Karelia. Over the past year and a half she has had to triple the number of weekly tours, and she still cannot keep up with demand. The tours, Polivanova adds, have become a rare space where participants can discuss the war. On many of them, she says, participants begin taking the initiative themselves, drawing direct comparisons between the brutality of Soviet repression and the news of Russian atrocities in Bucha and Mariupol.

The tours also attract participants from the other “camp.” Last fall, “patriotic” activists disrupted the tours for several weeks in a row, threatening those present and calling Memorial’s members traitors. Since then, Memorial has required participants to provide links to their social media accounts when they register.

Even as people flock to Memorial’s tours, the Russian authorities keep up the pressure on the organization and are working to undo its years of effort to hold accountable those who committed crimes under Soviet rule.

In September, a statue of Felix Dzerzhinsky, the founder of the Cheka, the notorious “commission for combating counterrevolution and sabotage,” was installed in front of the headquarters of Russia’s Foreign Intelligence Service. It is a near-exact replica of the Dzerzhinsky monument that stood for decades in front of the Soviet KGB headquarters, until Russians who had gathered at a rally marking the collapse of the USSR tore it down in 1991. Today the architect of the Red Terror stands in Moscow once again.

Translation and adaptation by Ivan Makridin.

The crackdown on pro-Palestinian gatherings in Germany https://www.codastory.com/rewriting-history/crackdown-pro-palestinian-gatherings-germany/ Mon, 06 Nov 2023 16:45:54 +0000 https://www.codastory.com/?p=47972 A ban on protests is raising deep questions about who is considered part of the nation and what, exactly, Germany has learned from its history.

On October 27, a rainy Friday evening in Berlin, as Israel bombed Gaza with new intensity before the launch of its ground invasion, I arrived at Alexanderplatz for a rally that had already been canceled. “Get walking now,” ordered one police officer in German. “You don’t need to be here,” shouted another in English. A father and daughter walked away from the police. He held her hand. She dragged a sign written in a shaky child’s script. “Ich bin keine Nummer.” I am not a number.

Why did we write this story?

Germany has banned most public gatherings in support of Palestinians. This has sparked a crisis around civil liberties and is prompting the question of who has a right to be part of the public conversation.

The police had called off the rally, “Berlin’s Children for Gaza’s Children,” five hours before it began because of “the imminent danger that at the gathering there will be inflammatory, antisemitic exclamations; the glorification of violence; [and] statements conveying a willingness to use violence and thereby lead to intimidation and violence.” Since October 7, when Hamas attacked Israel, this formulation of alarming possibilities has been used to preemptively ban about half of all planned public protests with presumed Palestinian sympathies.

“It was for dead kids,” I heard one woman say to another, in a kind of disbelief that this could have been objectionable. The rally disbanded peacefully — but at that night’s other canceled protest, a gathering of 100 people outside Berlin’s Reichstag, police deployed pepper spray and forcibly detained 74 people.

The woman’s shock registered a new reality that is coalescing in Germany. What happens when basic rights seem to conflict with Germany’s vaunted culture of “coming to terms with the past” — often interpreted as a call for anti-antisemitism? Recent events have raised troubling questions about who is considered part of the nation and what, exactly, Germany has learned from its history.

Police forces stand between counter-protesters and a pro-Palestine rally in Cologne, Germany on November 1, 2023. Ying Tang/NurPhoto via Getty Images.

Following the October 7 assault in which Hamas massacred 1,400 men, women, and children, German Chancellor Olaf Scholz expressed his condolences for the victims, condemned the attacks and proclaimed his solidarity with Israel. He reasserted the 2008 proclamation of his predecessor, Angela Merkel, that the protection of Israel is part of Germany’s “Staatsraison,” or part of the country’s reason for existence. The German government has remained steadfast in its support, even as Israel’s bombing campaign on Gaza has injured and killed high numbers of civilians — the latest death toll sits at 10,022 people, more than 4,000 of them children.

There has been little official sympathy for the plight of Gazans. But Germany is home to the largest Palestinian diaspora in Europe — an estimated 40,000 to 100,000 people — and people across the country have come together in solidarity with Palestine for both spontaneous and registered protests since the beginning of the conflict. In response, cities across Germany have tried to clamp down on these demonstrations, though the courts have overturned several of these attempts as illegal. In Berlin, bans have been issued against protests with titles such as “Peace in the Middle East”; “Jewish Berliners Against Violence in the Middle East,” a rally organized by Jewish Voice for a Just Peace in the Middle East, a Jewish organization; and “Youth Against Racism,” which was called after a high school teacher hit a student who had brought a Palestinian flag to school. Throughout, there have been shocking scenes of police brutalizing protesters.

Those who advocate for the bans point to incidents of people gathering on Sonnenallee, a central avenue in Berlin’s Neukoelln district, in support of the Hamas attack on October 7. One especially notorious event involved about 50 men who responded to the call of the Samidoun Palestinian Prisoner Solidarity Network “to celebrate the victory of resistance” by sharing baklava on the street. Berlin’s police treated it as a potentially criminal matter, noting on X, formerly known as Twitter, that they would “carry out the necessary measures.” Newspapers reported that the Israeli ambassador, Ron Prosor, called the men who had gathered “barbarians.”

Beyond these incidents, German politicians have seemingly competed among themselves to see who can promote anti-antisemitism the loudest — and who can be the harshest on the Muslim minority. Nancy Faeser, a government cabinet minister, urged that the government “use all legal means to deport Hamas supporters.” The leader of Germany’s center-right party, the Christian Democratic Union, Friedrich Merz, declared, “Germany cannot accept any more refugees. We have enough antisemitic men in this country.” Scholz, the chancellor, piled on: “Too many are coming,” he said. “We must finally deport on a grand scale.”

A police officer carries a Palestinian keffiyeh to a police car in Berlin’s Neukolln district. Sebastian Gollnow/picture alliance via Getty Images.

These are not wholly new tendencies in Germany. Last year, authorities in Berlin banned all public commemorations of the Nakba, the mass displacement of hundreds of thousands of Palestinians in 1948 after the founding of the state of Israel. Earlier this year, German police admitted in court that when they were enforcing the ban, they had simply targeted people who “looked Palestinian.” However, Berlin schools’ decision to forbid students from wearing the keffiyeh and other Palestinian symbols is an escalation that led even a member of Scholz’s own party to question if it could possibly be legal.

Since reunification in 1990, Germany’s national identity has been founded upon “coming to terms with the past.” That is, taking collective responsibility for the Holocaust and taking steps to ensure that it cannot happen again. Central to this protection of Jews has been the enforcement of anti-antisemitism at home, and, internationally, the support of Israel: Germany’s “Staatsraison.”

This culture of remembrance, however, holds little room for non-ethnic Germans. Coming to terms with the past requires that everyone shares the same past. The Muslim minority, for instance — most of whom arrived after 1945 — have found themselves freighted with the accusation of antisemitism for failing to identify with German guilt for the Holocaust. This is not to say that there is no antisemitism within the Muslim minority, but when the center-left Vice Chancellor Robert Habeck insisted in a recent speech that Muslims must distance themselves from antisemitism — or, in some cases, face deportation — he reinscribed the idea of the Muslim minority overall as antisemitic until proven otherwise. Muslims, and particularly Palestinians, have to prove that they deserve to be part of Germany.

The German press has inflamed the situation. Der Spiegel has peddled base stereotypes about Germany’s Muslims, and Bild has published a manifesto declaring that “we are experiencing a new dimension of hatred in our country — against our values, democracy, and against Germany.” But it isn’t just conservative publications pushing these narratives — the left-leaning Die Zeit recently published a piece that questioned whether Muslim immigrants could ever become “civilized.” And the leftist newspaper Taz has published editorials that purport to connect Palestinians with hate and Nazism. When during a speech at the Frankfurt Book Fair, the Slovenian philosopher Slavoj Zizek pleaded for the ethical imperative to think about both Israelis and Palestinians, he was accused of defending Hamas’ crimes.

Highly publicized antisemitic incidents — a Molotov cocktail thrown at a Berlin synagogue and Stars of David painted on homes — have further roiled Germany. Some Jews have said they are afraid to visit their temples. “Germany is a safe country for Jews,” Josef Schuster, the president of the Central Council of Jews, recently affirmed, noting his approval of Germany’s anti-Palestinian measures. “In my eyes, the security forces are doing everything to make sure that doesn’t change. Even if the threat in Germany currently comes more from the Arabic side than from the extreme right.”

However, other Jews in Germany have argued that Schuster misrepresents the real threat. A recent open letter from more than 100 Jewish artists and intellectuals in Germany — full disclosure: I am a signatory — cited the government’s own statistics, which paint a different picture about the risk of pro-Palestinian protests: “the perceived threat of such assemblies grossly inverts the actual threat to Jewish life in Germany, where, according to the federal police, the ‘vast majority’ of anti-Semitic crimes — around 84 percent — are committed by the German far right.”

For Palestinians, cultural institutions have largely shut their doors. An award ceremony for Palestinian writer Adania Shibli at the Frankfurt Book Fair was indefinitely postponed. In Berlin, Maxim Gorki Theater called off upcoming performances of its long-running and much celebrated “The Situation,” which gave voice to the experiences of Arabs, Palestinians and Jewish Israelis. A letter about the decision described how “war demands a simple division into friend and enemy.” Berlin’s Haus für Poesie canceled an upcoming launch party for “The Arabic Europe,” a collection of poetry edited by the Syrian-Palestinian poet Ghayath Almadhoun.

A Palestinian doctor and activist told me that the situation of Palestinians in Germany is one of “collective loneliness.” He asked to be called Nazir — there is a risk of professional repercussions for showing support for Palestinians. “The feeling is not only that we are losing family,” Nazir explained, “not only that a genocide is being done, not only that we have so much to fight with our own losses and pain, but we are not even allowed to mourn publicly. We are not allowed to speak up. We are not allowed to make demonstrations for the ones who are being killed in silence. And this is a whole different level of oppression, this state of oppression in Germany.”

A protester confronts riot police at a pro-Palestinian demonstration on Sonnenallee in Berlin’s Neukoelln district on October 18, 2023. Sean Gallup/Getty Images.

The center of Arabic-speaking life in Berlin is Neukoelln’s Sonnenallee, sometimes known to Germans as the “Arab Street.” The district has long been demonized — along with its neighboring Kreuzberg — by the German right. Recently, some have spoken of the district as a “little Gaza.” It was in Kreuzberg where a group of men handed out pastries to celebrate the Hamas attack. And the neighborhood since has been the site of various gatherings to show support for the people of Gaza under bombardment — and several confrontations with police. On October 18, an officer in riot gear stamped out tea lights at a vigil for those killed in an explosion at the Al-Ahli Arab Hospital. Later that night, parts of the street were on fire — in what Bild called a riot.

Since October 7, police have arrived most nights in riot gear, patrolling in force. On October 23, in just the two blocks between the restaurants Risa Chicken and Konditorei Damascus, I counted more than two dozen officers in full suits of riot armor and eight police vans. At the corner of Pannierstrasse, I spotted a group of six police who had detained eight people. “They tried to cross the street when it was red,” a man said to me, smiling in disbelief, pointing to two of the men in custody, who could be described as vaguely Middle Eastern, standing against the wall. “Can you believe it?” a woman with a gray hair covering exclaimed, nearly leaping with indignation. “How can you hold them for that?”

As a crowd gathered, a pair of teenagers walked past, one wearing a puffer jacket, the other in a Puma sweatshirt. As the signal turned green and they stepped onto the crosswalk, I heard one of them say to the other, “Artikel 8: Grundgesetz.” Article 8 of the Basic Law.

I had just heard that phrase for the first time earlier that evening. A protester in Hermannplatz, the square that lies at the mouth of Sonnenallee, had been reading out that very section of the Grundgesetz, which is the German constitution. Article 8 says, “All Germans have the right — without having to register or receive permission — to assemble peacefully, without weapons.”

The teenagers might have misread the situation. After all, the police were not detaining these men because they were protesting, but rather were arbitrarily detaining them for the minor infraction of jaywalking.

Riot police officers arrest a demonstrator at Hermannplatz, Berlin on October 11, 2023 at a pro-Palestinian gathering. John MacDougall /AFP via Getty Images.

“Why is everyone speaking now about Article 8?” Clemens Arzt, a professor of constitutional and administrative law at the Berlin School of Economics and Law, repeated my question before answering. “Because every half-educated person knows that Article 8 protects the freedom of assembly.”

Germany, he explained to me, recognizes assembly and speech as two distinct rights, as opposed to the First Amendment of the U.S. Constitution where they are intertwined. In Germany, Article 5 deals with freedom of speech and Article 8 with freedom of assembly. The practice of shutting down protests before they even begin really began with the pandemic, said Arzt, “when we preemptively implemented bans on gatherings at a mass scale.”

I mentioned to Arzt how I have repeatedly seen police demand that protesters put away their Palestinian flags. Is this legal? Arzt said that the police are given broad latitude to make these decisions, but only in the case of “imminent danger” to public safety — something that October’s demonstrations did not often entail. But he suggested that making these decisions on the spot can be so difficult for the police that one reason for the bans might have been that it was simply easier for them to pull the plug completely despite questions about legality.

The second reason for the bans, he said, has to do with Germany’s relationship with Israel. These protests are being broken up in the name of “Staatsraison.” While recognizing Germany’s important relationship with Israel, Arzt sees this current application as a problem. “It appears to me,” he said, “that, partially, the basic idea of the protection of Israel — this Staatsraison — results in taking priority over gatherings that cannot, actually, from a sober legal perspective be disbanded or forbidden.”

Participants at a pro-Israel rally gathered at Rosa-Luxemburg-Platz in Berlin on October 29, 2023. Christoph Soeder/picture alliance via Getty Images.

“If you meet 20 people or if you meet 10,000, the empowerment you feel after a big demonstration is a whole different level,” the Palestinian doctor Nazir told me with a grimace. “And Germany knows exactly that. And that is why Germany is banning the protests.”

“They fear the growing rise of solidarity happening in Berlin.”

Nazir has been in Berlin for most of his adult life, where he has cared for the sick, paid his taxes and participated in Palestine Speaks, an antiracist advocacy group dedicated to Palestinian rights. Since October 7, he has lost 19 members of his extended family to Israeli bombs. He wakes up every day, he told me, hoping that his parents and sister in Gaza remain unharmed. “This is the question with which I wake up every day,” he said, “and hope that answer is still ‘yes, they are alive.’”

“It’s one of the most schizophrenic situations I have found myself in,” he said. “I am good enough to pay taxes and to work in a hospital, to do intensive care and to hold the hand of grieving people and to give hope and optimism to parents and their children that we are going to overcome their health crises.” All of this, he said, “while you are dehumanized and while you are expecting every minute to get a note that your family does not exist.”

When we spoke, Palestine Speaks had begun to register their protests with more generic names like “Global South United”; that particular demonstration ended up drawing around 11,000 participants, one of the largest pro-Palestinian rallies in German history. Still, even when the protests happen, the police seek to disrupt them, Nazir said. He told me about a protest the previous weekend at Oranienplatz called “Decolonize. Against Oppression Globally.” There, he said the police had removed their speakers after the police translator misinterpreted a statement. Still, he said, it was a relief to feel the support of so many people during a time when the environment in Germany has become so deeply anti-Muslim.

“They are making house raids,” Nazir said of the German police, an assertion echoed by other activists with whom I spoke, who noted that referring to the events of October 7 as “resistance” online could result in a visit from the police. He emphasized how Germany’s treatment of Palestinians is only one part of the nation’s rightward shift, and how the current wave of anti-Arab and anti-Palestinian discourse is a symptom of Germany’s failure to learn from its past. “The most important question is not what’s happening toward Palestinians alone.”

“Germany needs Israel as a replacement nationality,” he said, referring to the idea of German identification with Israel as a nationality that Germany can feel unrestrainedly proud of. He cautioned that Germany also needs Israel to be “rehabilitated in the international community.” “Israel is the so-called proof that Germany learned a lesson from its history and that the denazification was a successful process.”

“But let’s be honest and point out the elephant in the room,” said Nazir. “The second biggest party in Germany is the AfD.”

Pro-Palestinian demonstrators gathered in Cologne, Germany on October 20, 2023. Hesham Elsherif/Getty Images.

The Alternative for Germany party, the far-right party notorious for its Islamophobia and xenophobia, has consistently polled at around 20% support, second only to the right-drifting Christian Democratic Union.

“It seems like everyone is really just trying to compete with the AfD at the moment,” said Wieland Hoban, a noted composer and chairman of Jewish Voice for a Just Peace in the Middle East, an anti-Zionist Jewish organization. He described the situation in Germany as having turned starkly to the right.

“The biggest warriors against antisemitism,” Hoban told me, “are conservatives and right-wingers who are doing that because they’re using antisemitism just to live out their anti-migrant racism by saying ‘OK, all these Muslims and Arabs are antisemites so let’s deport them all in order to fight antisemitism.’”

German society’s hypocrisy is exposed, suggested Hoban, in its tolerance of antisemitism among those who are already recognized as Germans. Hoban cited Hubert Aiwanger, a far-right politician and former schoolteacher in Bavaria, who was found to have distributed antisemitic and pro-Nazi pamphlets in his youth and only became more popular because of it, which he spun as a victory over “cancel culture.”

Hoban, describing the many instances of “police thuggery” he has witnessed while on the streets in recent weeks, argued that the presence of Palestinians is an inconvenient truth for German memory culture. “It’s just kind of obvious that any human, depending on their situation, can be a victim or a perpetrator,” said Hoban. “But it’s unbearable for some Germans, this idea that the Jews could have been their victims. But then in another context,” he said, referring to Jews, “we’re perpetrators.”

A Shabbat table with 220 empty chairs, representing the 220 Israeli hostages of Hamas, during a solidarity event organized by a Jewish congregation in Berlin’s Charlottenburg-Wilmersdorf district on October 27, 2023. Christoph Soeder/picture alliance via Getty Images.

Esra Ozyurek, a professor of sociology at the University of Cambridge, understands the difficulty people have in dealing with the mutability of roles when it comes to memory culture, the highly emotive project of “coming to terms with the past.” She described how the issue of memory politics often devolves into a competition, “a little bit like supporting teams in a soccer match.”

“I was at a talk,” she told me, “and then a young woman came to me and said, ‘I read your work, but I’m on team Israel.’ I said, ‘Wow, I’m not on any team.’”

Rather than thinking tribally, the broader ethical question is, she emphasized, “how we can live in a plural society, how we can deal with difference.”

Germany, she said, is hardly alone in its marginalization and repression of its minorities — even if its pretext for doing so is unique. This is typical of “big nationalist projects,” she said. “It is always their fear that the minorities find comfort in each other, and then they unite. So this big nationalist project is always about dividing the minorities and making them enemies of each other. This is not the first time this is happening. It is just so sad that it is happening in the name of fighting a form of racism.”

Ozyurek described how German society sees Muslims as the carriers of German antisemitism — a view that draws its support from German scholarship that claims antisemitism was exported to the Muslim world first by 19th-century missionaries and then by the Nazis in the 20th century. Meanwhile, Germany, by accepting its responsibility for the Holocaust, has become a modern, tolerant democratic nation. “It’s a very Christian narrative,” she said. “You start with your guilt and then you come to terms with it. You accept it, and then you’re liberated.”

Germans expect the Turkish and Arab minority to relate to the history of the Holocaust by identifying with the German majority and thus work through the guilt of what is called “the perpetrator society.” Like Germans, they are supposed to find ancestors to atone for — like the Grand Mufti of Jerusalem, a Nazi collaborator — in order to be accepted as full members of German society.

But, of course, the Muslim minority does not follow the German script. “Everyone relates to the story from where they are standing,” said Ozyurek. “They relate to it as minorities.”

Palestinians are not only a minority in Germany, but many of them came to Germany stateless as refugees. In the eyes of mainstream Germany, however, these conditions are disregarded as “self-victimization” — which places Palestinians in competition with Jews for the status of victim. “What is interesting,” Ozyurek said, referencing how Germans for many years believed themselves to be the real victim of World War II, “is that the qualities that are attributed to them are also qualities Germans have gotten over.”

“It’s just a Catch-22 situation,” said Ozyurek. “If you don’t have the Nazi ancestors, then how are you going to apologize for their crimes?” She added, “if they cannot join the national conversation, how can they feel they belong?”

The post The crackdown on pro-Palestinian gatherings in Germany appeared first on Coda Story.

]]>
Will a new regulation on AI help tame the machine? https://www.codastory.com/newsletters/artificial-intelligence-bias-regulation/ Fri, 03 Nov 2023 13:05:02 +0000 https://www.codastory.com/?p=47978 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Gazans face an internet blackout, and mobile spyware strikes again in India.

The post Will a new regulation on AI help tame the machine? appeared first on Coda Story.

]]>
About a year ago, police outside Atlanta, Georgia, pulled over a 29-year-old Black man named Randal Reid and arrested him on suspicion that he had committed a robbery in Louisiana — a state that Reid had never set foot in. After his lawyers secured Reid’s release, they found telltale signs that he’d been arrested due to a faulty match rendered by a facial recognition tool. 

As revealed by The New York Times, the Louisiana sheriff’s office that had ordered Reid’s arrest had a contract with Clearview AI, the New York-based facial recognition software company that allows clients to match images from surveillance video with the names and faces of people they wish to identify, drawing on a database containing billions of photos scraped from the internet. Reid spent six days in jail before authorities acknowledged their mistake.

Reid is just one among a growing list of people in the U.S. who have been through similar ordeals after police misidentified them using artificial intelligence. In nearly all reported cases, the people who were targeted are Black, and research has shown over and over again that these kinds of software tend to be less accurate when they try to identify the faces of people with darker skin tones. Yet police in the U.S. and around the world keep using these systems — because they can.

But there’s a glimmer of hope that the use of technology by law enforcement in the U.S. could start to be made more accountable. On Monday, the White House dropped an executive order on “safe, secure and trustworthy” AI, marking the first formal effort to regulate the technology at the federal level in the U.S.

Among many other things, the order requires tech companies to put their products through specific safety and security tests and share the results with the government before releasing their products into the wild. The testing process here, known as “red teaming,” is one where experts stress test a technology and see if it can be abused or misused in ways that could harm people. In theory at least, this kind of regime could put a stop to the deployment of tools like Clearview AI’s software, which misidentified Randal Reid.

If done well, this could be a game changer. But in what seems like typical U.S. fashion, the order feels more like a roadmap for tech companies than a regulatory regime with hard restrictions. I exchanged emails about it with Albert Fox Cahn, who runs the Surveillance Tech Oversight Project. From his standpoint, red teaming is no way to strike at the roots of the problems that AI can pose for the public interest. “There is a growing cadre of companies that are selling auditing services to the highest bidder, rubber stamping nearly whatever the client puts forward,” he wrote. “All too often this turns into regulatory theater, creating the impression of AI safeguards while leaving abusive practices in place.” Fox Cahn identified Clearview AI as a textbook example of the kinds of practices he’s concerned about.

Why not ban some kinds of AI altogether? This is what the forthcoming Artificial Intelligence Act will do in the European Union, and it could be a really good model to copy. I also chatted about it with Sarah Myers West, managing director of the AI Now Institute. She brought up the example of biometric surveillance in public spaces, which soon will be flat-out illegal in the EU. “We should just be able to say, ‘We don’t want that kind of AI to be used, period, it’s too harmful for the public,’” said West. But for now, it seems like this is just too much for the U.S. to say.

GLOBAL NEWS

The internet went dark in Gaza this past weekend, as Israeli forces began their ground invasion. More than 9,000 people have already been killed in nearly a month of aerial bombardment. With the power out and infrastructure reduced to rubble, the internet in Gaza has been faltering for weeks. But a full-on internet shutdown meant that emergency response crews, for instance, were literally just racing towards explosions wherever they could see and hear them, assuming that people would soon be in need of help. U.S. senior officials speaking anonymously to The New York Times and The Washington Post said they had urged Israeli authorities to turn the networks back on. By Sunday, networks were online once again.

Elon Musk briefly jumped into the fray, offering an internet hookup to humanitarian organizations in Gaza through his Starlink satellite service. But as veteran network analyst Doug Madory pointed out, even doing this would require Israel’s permission. I don’t think Musk is the best solution to this kind of problem — or any problem — but satellite networks could prove critical in situations like these where communication lines are cut off and people can’t get help that they desperately need. Madory had a suggestion on that too. Ideally, he posted on X, international rules could mandate that “if a country cuts internet service, they lose their right to block new entrants to the market.” Good idea.

Opposition politicians and a handful of journalists in India have become prime surveillance targets, says Apple. Nearly 20 people were notified by the company earlier this week that their iPhones were targeted in attacks that looked like they came from state-sponsored actors. Was Prime Minister Narendra Modi’s Bharatiya Janata Party behind it? It’s too soon to say, but there’s evidence that the ruling government has all the tools it needs to do exactly that. In 2021, the phone numbers of more than 300 Indian journalists, politicians, activists and researchers turned up on a leaked list of phones targeted with Pegasus, the notoriously invasive military-grade spyware made by NSO Group. At Coda, we reported on the fallout from the extensive surveillance for one group of activists on our podcast with Audible.

WHAT WE’RE READING

  • My friend Ethan Zuckerman wrote for Prospect magazine this week about the spike in disinformation, new measures that block researchers from accessing social media data, and lawsuits targeting this type of research. These factors, he says, are taking us to a place where what happens online is, in a word, “unknowable.”
  • Peter Guest’s excellent piece for Wired about the U.K.’s AI summit drolly described it as “set to be simultaneously doom-laden and underwhelming.” It’s a fun read and extra fun for me, since Pete will be joining our editorial team in a few weeks. Keep your eyes peeled for his stuff, soon to be coming from Coda.

The post Will a new regulation on AI help tame the machine? appeared first on Coda Story.

]]>
The movement to expel Muslims and create a Hindu holy land https://www.codastory.com/rewriting-history/the-movement-to-expel-muslims-and-create-a-hindu-holy-land/ Thu, 02 Nov 2023 09:57:20 +0000 https://www.codastory.com/?p=47370 In the mountains of Uttarakhand, a northern Indian state revered by Hindu pilgrims, a campaign to drive out Muslims is underway

The post The movement to expel Muslims and create a Hindu holy land appeared first on Coda Story.

]]>
Late on a hot night this summer, Mohammad Ashraf paced around his house, wondering if the time had finally come for him to flee his home of 40 years. Outside his window lay the verdant slopes of the Himalayas. All of Purola, a small mountain village in the northern Indian state of Uttarakhand, appeared to be asleep, tranquil under the cover of darkness. But Ashraf was awake. Could he hear noises? Were those footsteps beneath his window? Did his neighbors mean to do him harm?

“I was very afraid,” Ashraf said. “My kids were crying.”

Why did we write this story?

Prime Minister Narendra Modi’s government is working steadily to transform India from a secular democracy into a Hindu nation at the expense of minorities, particularly Muslims.

Since May 29, there had been unrest in Purola. The local chapter of India’s governing Bharatiya Janata Party, along with several other right wing Hindu nationalist groups, had staged a rally in which they demanded that local Muslims leave town before a major Hindu council meeting scheduled for June 15. On June 5, Ashraf’s clothing shop, like the shops of other Muslim traders, was covered with posters that warned “all Love Jihadis” should leave Purola or face dire consequences. They were signed by a Hindu supremacist group called the “Dev Bhoomi Raksha Abhiyan,” or the Movement to Protect God’s Land.

The rally in Purola was the culmination of anti-Muslim anger and agitation that had been building for a month. Earlier in May, two men, one Muslim and one Hindu, were reportedly seen leaving town with a teenage Hindu girl. Local Hindu leaders aided by the local media described it as a case of “love jihad,” a reference to the conspiracy theory popular among India’s Hindu nationalist right wing that Muslim men are seeking to marry and convert Hindu women to Islam. Public outrage began to boil over. The men were soon arrested for “kidnapping” the girl, but her uncle later stated that she had gone willingly with the men and that the charges were a fabrication.

It mattered little. Hindu organizations rallied to protest what they claimed was a spreading of love jihad in the region, whipping up the frenzy that had kept Ashraf’s family up at night, fearing for their safety.

Purola main market.

What is happening in Uttarakhand offers a glimpse into the consequences of the systematic hate campaigns directed at Muslims in the nine years since Narendra Modi became prime minister. Hindu nationalists believe that the Hindu-first ideology of the government means they have the support necessary to make the dream of transforming India into a Hindu rather than secular nation a reality. Muslims make up about 14% of the Indian population, with another 5% of the Indian population represented by other religious minorities including Christians. In a majoritarian Hindu India, all of these minorities, well over 250 million people, would live as second-class citizens. But it is Muslims who have the most to fear.

Not long after the events in Purola, Modi would go on a highly publicized state visit to the United States. “Two great nations, two great friends and two great powers,” toasted President Joe Biden at the state dinner. The only discordant note was struck at a press conference — a rarity for Modi who has never answered a direct question at a press conference in India since he became prime minister in 2014. But in Washington, standing alongside Biden, Modi agreed to answer one question from a U.S. journalist. The Wall Street Journal’s Sabrina Siddiqui was picked. “What steps are you and your government willing to take,” she asked Modi, “to improve the rights of Muslims and other minorities in your country and to uphold free speech?”

In his answer, Modi insisted that democracy was in the DNA of India, just as it was in the U.S. For daring to ask the question, Siddiqui was trolled for days, the victim of the sort of internet pile-on that has become a familiar tactic of the governing BJP and its Hindu nationalist supporters. In the end, a White House spokesperson, John Kirby, denounced the harassment as “antithetical to the principles of democracy.”

Modi has received warm, enthusiastic welcomes everywhere from Sydney and Paris to Washington. In every country he visits, Modi talks up India as a beacon of democracy, plurality and religious tolerance. But as India prepares for elections in 2024, and Modi expects to return to office for a third consecutive five-year term, the country is teetering between its constitutional commitment to secular democracy and the BJP’s ideological commitment to its vision of India as a Hindu nation.

In a sharply worded critique of Modi’s state visit to the U.S., author Arundhati Roy, writing in The New York Times, noted that the State Department and the White House “would have known plenty about the man for whom they were rolling out the red carpet.” They might, she wrote, “also have known that at the same time they were feting Mr. Modi, Muslims were fleeing a small town in northern India.”

Indian Prime Minister Narendra Modi answering a question at a press conference in Washington, DC, while on a state visit to the U.S. in June. Win McNamee/Getty Images.

Roy was referring to the right wing Hindu rallies in Uttarakhand. On May 29, a thousand people marched across Purola, chanting “Jai Shri Ram” — a phrase once used as a greeting between observant Hindus that has in the recent past become a battle cry for Hindu nationalists. During the rally, the storefronts of Muslim-run shops were defaced and property was damaged. The police, walking alongside the mob, did nothing to stop the destruction. Several local BJP leaders and office-bearers participated in the march. A police official later told us that the rally had been permitted by the local administration and the town’s markets were officially shut down to allow for the demonstrations.

As the marchers advanced through the town’s narrow lanes, Ashraf said they intentionally passed by his home. His family, one of the oldest and most well-established Muslim families in Purola, has run a clothing shop in Purola for generations. Ashraf was born in the town and his father moved to Purola more than 40 years ago. 

“They came to my gate and hurled abuse,” he said. “Drive away the love jihadis,” the crowd screamed. “Drive away the Muslims.” 

Among the slogans was a particularly chilling one: “Muslim mukt Uttarakhand chahiye.” They wanted an Uttarakhand free of Muslims, they said in Hindi. A call, effectively, for ethnic cleansing. 

Ashraf’s three young children watched the demonstration from their window. “My 9-year-old,” he told us, “asked, ‘Papa, have you done something wrong?’”

Forty Muslim families fled Purola, a little under 10% of its population of 2,500 people. Ashraf’s was one of two families who decided to stay. “Why should I leave?” he asked. “Everything I have is here. This is my home. Where will I go?”

Mohammad Ashraf, whose clothing store was vandalized by Hindu nationalists in Purola in June and covered with posters warning Muslims to leave town.

The campaign in Purola spread quickly to other parts of the state. On June 3, a large rally took place in Barkot, another small mountain town in Uttarakhand, about an hour’s drive from Purola. Thousands marched through the town’s streets and neighborhoods as a loudspeaker played Hindu nationalist songs. “Har Ghar Bhagwa Chhayega, Ram Rajya Ab Aayega” — Every House Will Fly the Hindu Flag, Lord Ram’s Kingdom Is Coming. 

Muslim shopkeepers in the town’s market, like the Hindu shopkeepers, had pulled their shutters down for the day, anticipating trouble at the rally. As the mob passed by the shops, they marked each Muslim-run shop with a large black X. The town’s Muslim residents estimate that at least 43 shops were singled out with black crosses. Videos taken at the rally, shared with us, showed the mob attacking the marked-up Muslim shops to loud cheers from the crowd. The police stood by and watched. 

One Muslim shopkeeper, speaking anonymously for fear of retribution, described arriving at his shop the next day and seeing the large black cross. “My first thought was ‘Heil Hitler,’” he said. “I have read Hitler’s history. That’s how he had marked out Jews. It is the same strategy. That’s how we are being identified.”

We spoke to dozens of people who identify with and are members of Hindu nationalist parties, ranging from Modi’s BJP to fringe, far-right militant groups such as the Bajrang Dal, analogous in some ways to the Proud Boys. Again and again, we were told that just as “Muslims have Mecca and Christians have the Vatican,” Hindus need their own holy land. Uttarakhand, home to a number of important sites of pilgrimage, is, in this narrative, the natural home for such a project — if only the state could rid itself of Muslims, or at the very least monitor and restrict their movement and forbid future settlement. Nearly 1.5 million Muslims currently live in Uttarakhand, about 14% of the state’s entire population, which exactly reflects the proportion nationally.

Hindu nationalists told us how they are working to create and propagate this purely Hindu holy land. Their tactics include public rallies with open hate speech, village-level meetings and door-to-door campaigns. WhatsApp, Facebook and YouTube are essential parts of their modus operandi. These were tools, they said, to “awaken” and “unite” Hindus. 

Their attempts to portray Muslims as outsiders in Uttarakhand dovetail with a larger national narrative that Hindus alone are the original and rightful inhabitants of India. The BJP’s ideological parent, the Rashtriya Swayamsevak Sangh, founded in 1925, argues that India is indisputably a “Hindu rashtra,” a Hindu nation, never mind what the Indian constitution might say.

With a population of 11.5 million, Uttarakhand stretches across the green Himalayan foothills. It is a prime tourist destination known for its imposing mountains, cascading white rivers and stone-lined creeks. It is home to four key Hindu pilgrimage sites — the sources of two holy rivers, the Ganges and the Yamuna; and Kedarnath and Badrinath, two temples dedicated to the Hindu gods Shiva and Vishnu respectively. Together, these four sites, high up in rugged mountain terrain, form a religious travel circuit known as the Chota Char Dham. According to state government figures, over 4 million pilgrims visited these sites in 2022 alone. Downhill, Haridwar, a town on the banks of the Ganges, is of such spiritual significance that Hinduism’s many seers, sages and priests make it their home. For Hindus in north India, Uttarakhand is the center of 4,000 years of tradition.

The state of Uttarakhand is also one of India’s newest — formed in November 2000, carved out of Uttar Pradesh, a huge, densely populated north Indian state. Its creation was the result of a long socio-political movement demanding a separate hill state with greater autonomy and rights for its many Indigenous peoples, who form just under 3% of the state’s population and are divided into five major tribal groups. These groups are protected by the Indian constitution, and their culture and beliefs are distinct from mainstream Hindu practice. But over the last decade, Uttarakhand has seen its identity shift from a mountain state created to better represent its Indigenous population to one molded and marketed primarily as “Dev Bhoomi,” a sacred land for Hindus. 

Since becoming prime minister, Modi has made at least six trips to the state’s key pilgrimage sites, each time amidst much hype and publicity. In May 2019, in the final stages of the month-long general election, Modi spent a day being photographed meditating in a remote mountain cave, less than a mile from the Kedarnath shrine. Images were beamed around the country of Modi wrapped in a saffron shawl, eyes closed, sitting cross-legged atop a single wooden bed. The symbolism was not lost on Hindus — the mountains and caves of Uttarakhand are believed to be the abode of the powerful, ascetic Shiva, who is often depicted in deep meditation on a mountain peak. 

Like other Muslims in Purola, Zahid Malik, who is a BJP official, was also forced to leave his home. We met him in the plains, in the town of Vikasnagar, to which he had fled. He said Hindus had threatened to set his clothing shop on fire. “If I, the BJP’s district head, face this,” he told us, “imagine what was happening to Muslims without my connections. For Hindus, all of us are jihadis.”

Malik emphasized that Muslims have lived for generations in the region and participated in the creation of Uttarakhand. “We have been here since before the state was made,” Malik told us. “We have protested. I myself have carried flags and my people have gone on hunger strikes demanding the creation of this state, and today we are being kicked out from here like you shoo away flies from milk.”

For Malik, the irony is that it is members of his own party who want people like him out of Uttarakhand. 

Ajendra Ajay is a BJP leader and the president of the Badrinath Kedarnath Temple Committee, an influential post in a state dominated by the pilgrimage economy. “In the mountain regions, locals are migrating out,” he told us, “but the population of a certain community is increasing.” He means Muslims, though he offered no numbers to back his claims. Nationally, while the Muslim birth rate is higher than that of other groups, including Hindus, it is also dropping fast. But the supposed threat of Muslims trying to effect demographic change in India through population growth is a standard Hindu nationalist trope. 

“Uttarakhand is very sacred for Hindus and the purity of this land, its special religious and cultural character, should be maintained,” Ajay said. His solution to maintaining interreligious harmony is to draw stricter boundaries around “our religious sites” and to enforce “some restrictions on the entry of non-Hindus into these areas.”

Pilgrims gathered in front of the Badrinath temple in Uttarakhand, one of the four most sacred Hindu pilgrimage sites. Frank Bienewald/LightRocket via Getty Images.

On our way to Purola, the thin road snaking around sharp mountain bends, we stopped at another hill town by the Yamuna river. Naugaon is a settlement of approximately 5,000 people, many of whom are rice and potato farmers. The town’s center has a small strip of shops that sell clothes, sweets and medicines. In another era, it might have been possible to imagine a tiny, remote spot like this being disconnected from the divisive politics of the cities. But social media and smartphones mean Naugaon is no longer immune. While technology has bridged some divides, it has exacerbated others.

News of the public rallies in Purola in which Hindu supremacists demanded that Muslims either leave or be driven out spread quickly. In Naugaon, a new WhatsApp group was created. The group’s name, translated from Hindi, was “Hinduism is our identity.” By the end of June, it had 849 members. Deepak Rawat, a pharmacist in the Naugaon market, was among the participants. “People are becoming more radicalized,” he said approvingly, as he scrolled through posts on the group.

People we met in Naugaon told us there had already been a campaign in 2018 to drive Muslims away from this tiny rural outpost. “We chased them out of town,” they told us.

Sumit Rawat, a farmer in Naugaon, described what happened. According to him, a young Hindu girl had been kidnapped by a Muslim waste-picker and was rescued by passersby who heard her cries for help. (We were not able to independently corroborate Rawat’s claims.) He told us that Hindus marched in protest at the attempted abduction. Their numbers were so great, said Rawat, that the rally stretched a mile down the market street. With little reporting of these incidents in the national press, people in cities are largely unaware of the rage that seethes in India’s rural towns and villages. “We want Muslims here to have no rights,” Rawat told us. “How can we trust any of them?”

Hindu nationalists in suburban Mumbai protesting in February against “love jihad,” a right wing conspiracy theory that claims Muslim men are luring Hindu women into marriage and converting them to Islam. Bachchan Kumar/Hindustan Times via Getty Images.

In Dehradun, the Uttarakhand capital, we met Darshan Bharti, a self-styled Hindu “saint” and founder of the “Dev Bhoomi Raksha Abhiyan,” or the Movement to Protect God’s Land. He was dressed in saffron robes and a string of prayer beads. The room in which we sat had swords hung on the orange walls. His organization was behind the posters pasted on shops in Purola owned by Muslims, ordering them to leave town. 

On June 7, with the anti-Muslim demonstrations in Purola still in the news, Bharti posted a picture on his Facebook page with Kumar, the state’s police chief. Even as Bharti spoke of inciting and committing violence, he dropped the names of several politicians and administrators in both the state and national governments with whom he claimed to be on friendly terms. In the room in which we met, there was a photograph of him with the current national security adviser, Ajit Doval, among a handful of figures believed to wield considerable influence over Modi. 

Bharti also claims to have met Pushkar Singh Dhami, the Uttarakhand chief minister, the highest elected official in the state, on several occasions. He has posted at least two pictures of these meetings on his social media accounts. He described Dhami as his disciple, his man. “All our demands, like dealing with love jihad and land jihad, are being met by the Uttarakhand government,” Bharti said. Land jihad is a right wing conspiracy theory that claims Muslims are illegally encroaching on Hindu land to build Muslim places of worship.  

We met Ujjwal Pandit, a former vice president of the BJP’s youth wing and now a state government functionary, at a government housing complex on the banks of the Ganges in Haridwar. It didn’t take long for him to claim that Muslims were part of a conspiracy to take over Uttarakhand through demographic force. In Uttarakhand, he said, guests were welcome but they had to know how to behave.
Pandit claimed, as have BJP leaders at state and national levels, that no Muslims had been forced to leave Purola, that those who left had fled of their own accord. As the red sun set behind us into the Ganges, he said quietly, “This is a holy land of saints. Sinners won’t survive here.”

Surviving Russia’s control https://www.codastory.com/rewriting-history/memorial-human-rights-group-russia-crackdown/ Mon, 30 Oct 2023 08:38:54 +0000 https://www.codastory.com/?p=47262 After being shut down by Russia’s Supreme Court, Memorial, the Nobel Peace Prize-winning rights group, is still operating in Russia, thanks to a survival strategy long in place.

In the final days of 2021, on the eve of the invasion of Ukraine, the Russian Supreme Court ordered Memorial, Russia’s oldest and largest human rights group, to be “liquidated.” On the day Memorial was awarded the 2022 Nobel Peace Prize, Russian authorities seized the organization’s Moscow offices.

Yet, nearly two years later, Memorial has not closed down. Its staff, led by mostly aging, bookish historians, have not just forestalled their demise but steered the organization to the razor’s edge of Russian political dissent.

It has no headquarters and no legal status in Russia. Its bank accounts are frozen and its programming has been pushed to the Moscow sidewalks. Yet, at a time when nearly all independent Russian media are operating in exile and Kremlin critics have been jailed, silenced or left the country, Memorial, in many ways, is roaring: publishing books, monitoring the ongoing trials of Ukrainian prisoners of war in Russia, offering free consulting to the relatives of people who disappeared during Soviet times on how to search archives for information, advocating for the growing list of political prisoners in Russia, and expanding its offices outside the country.

Why did we write this story?

When the Kremlin ordered Memorial to shut down, it fixed the perception of Russia as a country where political dissent has been wiped out. Memorial’s perseverance illustrates that the reality is more nuanced.

None of this is happening in the shadows. Memorial organizes regular “Topography of Terror” tours in Moscow, with one route going right up to the doorstep of Butyrka, one of Russia’s most notorious prisons during the Soviet era. The excursion ends with participants sitting down to write letters to the new generation of Russians imprisoned on politically motivated charges and awaiting trial inside the 250-year-old facility. Tickets sell out almost immediately.

“Our work could not stop for a single day,” historian and Memorial founding member Irina Scherbakova said.

Its annual “Returning the Names” ceremony, in which people line up to read aloud the names of those killed by the Soviet regime, took place online on October 29 in cities across the world. Set up by the group in 2007, the event used to be held in front of the former KGB headquarters in Moscow, lasting twelve emotional hours, but for the last few years, Moscow authorities have denied the group a permit.

While Memorial has worked under Kremlin intimidation for years, the war in Ukraine created an entirely new reality for an organization pursuing a mission to investigate Soviet-era crimes and expose present-day political abuses. In one of the most horrific recent cases highlighted by Memorial, Russian poet and activist Artyom Kamardin was raped with a dumbbell by law enforcement officers in September 2022 during a raid on his home after he posted a video online reciting an anti-war poem.

Memorial has withstood dismantling attempts thanks to a survival strategy put in place by its founders. Memorial is not a single organization, as its members like to remind the public, but a movement. Since its founding in 1987, the group has grown into a sprawling, decentralized network of organizations and individuals resilient against the Kremlin’s targeting.

There are more than 200 Memorial members and volunteers working globally, with just under a hundred left in Russia. With each local branch registered independently, it would take 25 separate court cases to entirely shut down the network inside the country. There are satellite offices in Belgium, the Czech Republic, France, Germany, Israel, Italy, Lithuania, Sweden, Switzerland and Ukraine. Earlier this year, two shuttered Russia-based Memorial organizations re-registered outside the country under new names in Switzerland and France.

“From the very beginning we knew we didn’t want a hierarchy,” explained Scherbakova. “We always knew that this was a grassroots story. If there had been a hierarchy, Russia would have destroyed us a long time ago.”

A Memorial employee leaves Russia’s Supreme Court on December 14, 2021. Dimitar Dilkoff/AFP via Getty Images.

Memorial’s affiliate offices abroad have long been largely made up of local historians studying the Soviet period, but now many branches are absorbing staff that fled Russia.

In the past 18 months, the Prague office has become a new headquarters of sorts. Today, the staff is a mix of Czechs and Russians. At the age of 70, the director of Memorial’s library, Boris Belenkin, fled Moscow for Prague last year. Belenkin calls the space a new “place for life” where Memorial workers can once again hold seminars, organize research fellowships and host visiting scholars.

From the Prague office, Memorial is also re-launching one of its most beloved programs: an essay-writing contest in which students in Russia were asked to delve into 20th century history. The contest had been run since 1999 in participating schools across 12 time zones before being called off in 2021. Finalists were flown out to Moscow to present their work at Memorial headquarters. For many students from far-flung regions, it was a once-in-a-lifetime opportunity to see their country’s capital. Over the years, schools dropped the program, caving to pressure from local officials and concerned, “patriotic-minded” parents.

Within Russia, pressure on staff continues to escalate. The director of Memorial’s branch in the Urals city of Perm was arrested in May for “hooliganism” and has been in pre-trial detention ever since. Offices in Yekaterinburg and other cities face routine harassment and arbitrary fines from local authorities, pushing some to the verge of closing. A prominent Memorial historian, Yuri Dmitriev, is currently serving a 15-year sentence at a prison in what Memorial says is a politically motivated case. Both men are currently being held in facilities that were once part of the Soviet Gulag camp system.

In Moscow, nine Memorial members including Alexandra Polivanova, a programming director who leads the Butyrka prison tour, have become the targets of an ongoing criminal investigation. In May, authorities charged Memorial board member Oleg Orlov with “discrediting” the Russian military, a new crime in Russia that can carry a prison sentence of up to five years. In court in September, Orlov was asked to defend his denunciation of the war in Ukraine as well as his career documenting human rights abuses for Memorial in Chechnya and the wider Caucasus region, in Nagorno-Karabakh and in Ukraine. On October 11, the court found Orlov guilty and fined him. The government prosecutor requested that Orlov undergo a mental health evaluation, citing his “heightened sense of justice, lack of self-preservation instincts, and posturing before citizens.”

Oleg Orlov lays flowers at the monument for the victims of political repressions in front of FSB headquarters in Moscow on October 29, 2023. Alexander Nemenov / AFP via Getty Images.

Memorial believes the criminal cases against Moscow staff are motivated by their ongoing advocacy for political prisoners in Russia. Memorial Center, the organization’s human rights branch, runs a database of people imprisoned on politically motivated charges that is often cited by international organizations. It also publishes regular updates on the prisoners and their cases, features interviews with their family members and organizes letter writing campaigns. Today, there are 609 people on Memorial’s list — a number that has tripled in the past five years.

Scherbakova, Memorial’s director and a historian of the Soviet Union, says this number is higher than it was in the late Soviet period.

“In my opinion, today’s situation is much scarier and crueler,” said Scherbakova.

Memorial has been in the Kremlin’s crosshairs since it condemned Russia’s invasion and occupation of Crimea and other territories in eastern Ukraine in 2014. The government’s most powerful legal tool is the Foreign Agents Act, legislation designed to pressure groups and individuals who receive funding from outside the country. Passed in 2012 and expanded in 2020, the law imposes up to five years of imprisonment for failing to comply with an exhaustive system of tedious financial reporting and bureaucracy.

Russian authorities have also used the foreign agents law to target individuals. In mid-October, Russian police detained Alsu Kurmasheva, a Prague-based journalist at Radio Free Europe with dual Russian-American citizenship, for failing to register as a foreign agent when she traveled to Russia for a family emergency. If convicted, Kurmasheva faces up to five years in prison.

Authoritarian leaders around the world have since adopted similar legislation to quash dissent at home.

“Today, being a spy, a counter-revolutionary, a Trotskyist, all of that has been folded into the term ‘foreign agent,’” said Belenkin, the Memorial library director and a founding member of Memorial who was added to the Kremlin’s foreign agents list in 2022.

In 2021, the government brought Memorial before the Supreme Court, alleging that it had violated the law by failing to label a handful of social media posts with boilerplate text disclosing that Memorial is classed as a foreign agent. But by the closing argument, prosecutors dropped any pretense of holding Memorial accountable for a few unlabeled social media posts. Instead, the general prosecutor, Alexei Zhafyarov, took to the floor to dramatically rail against the group.

“Memorial speculates on the topic of political repression, distorts historical memory, including about World War II, and creates a false image of the Soviet Union as a terrorist state,” said Zhafyarov, mocking Memorial for “claiming to be the conscience of the nation.”

“Why, instead of being proud of our country, are we being told we must repent for our past?” Zhafyarov asked the courtroom.

The “Returning the names” ceremony organized by Memorial in front of the former KGB headquarters, now home to the FSB, on October 29, 2016. Kirill Kudyravtsev /AFP via Getty Images.

Russia’s Supreme Court is led by Chief Justice Vyacheslav Lebedev, who began his career sending anti-Soviet dissidents to Gulag camps in the 1980s and managed to stay in power following the collapse of the USSR — one of many Soviet officials who survived the transition to democracy.

Grigory Vaypan, part of Memorial’s defense team, said that ultimately this was an opportunity to expose the government’s real motivation for bringing the group to court and state for the historical record what Memorial’s closing was really about. “Zhafyarov rose, and instead of telling us about those posts on Twitter and Instagram, he said, ‘We should close Memorial because Memorial is pursuing a narrative that is not in the interest of the state,’” said Vaypan. “They needed to close Memorial because Memorial messed with the government’s narrative that ‘we, the Russian state, the state that won the Second World War, are unaccountable to the world.’”

“Re-reading the closing argument now makes much more sense to me than it did back then,” said Vaypan. “What the prosecutor said was a prologue to the war.”

Memorial lost an appeal in the Supreme Court in March 2022 as Russian troops marched to Kyiv. The war has left members asking themselves the same question that is echoing across Russian civil society: How did things go so wrong?

At Memorial, an initiative dedicated to preventing the return of totalitarianism to Russia, the invasion of Ukraine has led to a difficult, at times contentious, internal re-examination of its own legacy.

“We’re trying to understand what wasn’t right in our work over the past 35 years: How we didn’t build up cooperation with Russian society, how we failed to see different, more complex forms of discrimination and oppression,” Polivanova, the programming director, said. “We had blind spots in our work to the point where, in a sense, we all allowed this terrible war to happen.”

There was a mixed global reaction last year when the Nobel committee announced that the 2022 Nobel Peace Prize would be shared among Memorial, the Ukrainian Center for Civil Liberties and Ales Bialiatski, a human rights advocate from Belarus. The director of the Ukrainian organization, Oleksandra Matviichuk, praised Memorial’s work but refused to be interviewed alongside Yan Rachinsky, who accepted the award for Memorial in Oslo. Ukraine’s ambassador to Germany called the shared recognition “truly devastating” in the context of the ongoing war, launched by Russia in part from Belarusian territory.

Natalia Pinchuk on behalf of her husband, jailed Belarusian activist Ales Bialiatski, Yan Rachinsky of Memorial and the head of the Ukrainian Center for Civil Liberties, Oleksandra Matviichuk, pose with their Nobel Peace Prize medals in Oslo on December 10, 2022. Sergei Gapon / AFP via Getty Images.

Not everyone at Memorial thinks the group should be judged through the lens of Russia’s war and hard turn towards authoritarianism.

“Without question, a medium-sized organization, with limited resources, and even with our network, could not change anything,” said Belenkin, director of Memorial’s library, in regards to the war. “Memorial is not relevant here.”

But Polivanova, who operates the tours and is a generation younger than much of Memorial’s leadership, believes that Memorial must re-examine its own legacy in connection to the war. The ongoing discussion among Memorial members on this topic has been “very difficult,” she said. She has reworked her tour lineup, with one of the new Moscow excursions dedicated to the Ukrainian human rights activist Petro Grigorenko.

Born in a small village in Ukraine’s Zaporizhzhya region in what was then the Russian empire, Grigorenko rose through the ranks of the Soviet Army to become a World War II hero and a major general. At the height of his career in 1968, Grigorenko broke with the Soviet Army by speaking out against the invasion of Czechoslovakia during the Prague Spring. Punishment came swiftly: He was arrested in Moscow, diagnosed as criminally insane and underwent punitive psychiatric treatment, a practice that has re-emerged under President Vladimir Putin. Somehow, Grigorenko managed to continue speaking out for the cause of long-persecuted Crimean Tatars, dared to criticize the Soviet narrative of the Second World War, and founded the Moscow and Ukrainian Helsinki Groups before being exiled.

“In the past, we didn’t consider this story to be so important,” Polivanova said. “This historical perspective was not stressed at Memorial.”

The updated tour lineup that includes Grigorenko’s life in Moscow has had a surge in popularity since the full-scale invasion of Ukraine. Over the past year and a half, Polivanova has had to triple the number of weekly walking tours and still isn’t able to keep up with demand. Registration fills up almost immediately after dates are announced.

The tours are one of the rare public forums available to Russians to discuss the war. “People are really engaging,” Polivanova said. In September 2022, she added readings of Ukrainian poetry written by authors killed during Stalin’s purges to a tour of a mass grave site in Russia’s northeast. On many excursions, participants start to take over, she said, drawing direct comparisons between the cruelty of Soviet repression and news of Russian atrocities in Bucha, Mariupol and other frontlines in Ukraine.

The tours have also attracted a different kind of participant. “Patriotic” activists crashed the organized outings for weeks at a time last fall, threatening those in attendance and publicly denouncing members of Memorial as “traitors.” Since then, Memorial has required that participants provide links to their social media accounts when registering for a tour.

As people line up for Memorial’s tours, the government’s attempts to reverse many of Memorial’s decades-long efforts to seek accountability for crimes committed under communism remain relentless.

In September, the Russian Foreign Intelligence Service unveiled a looming statue of Felix Dzerzhinsky, who founded the infamous Soviet political police apparatus, in front of its offices. The statue was almost an exact copy of a Dzerzhinsky monument that stood for decades in front of the Moscow headquarters of the KGB, the Soviet Union’s secret police and intelligence agency. In 1991, Russians who had gathered to protest for an end to totalitarian Soviet rule and a transition to democracy tore it down. Today, the spymaster, ally of Lenin and Stalin, architect of the Red Terror, stands again in Moscow.

How the new UK tech law hurts Wikipedia https://www.codastory.com/newsletters/better-internet-wikipedia/ Thu, 26 Oct 2023 18:01:51 +0000 https://www.codastory.com/?p=47486 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us. Also in this edition: Meta keeps mistreating content from Palestine, Venezuelans cast primary ballots (despite censorship) and Apple has a problem with Jon Stewart.

It has been an incredibly difficult three weeks in the world, and the internet shows it. In the last couple of newsletters, I’ve noted just how hard it is to find reliable information on the social web right now, where everything seems to revolve around attention, revenue and shock value, and verified facts are few and far between. So this week, I’m turning my attention to a totally different part of the internet: Wikipedia. 

It’s been on my mind lately because of the proposed new online safety law in the U.K. that will set strict age requirements for young people online and require websites to scan and somehow remove all content that could be harmful to kids before it appears online. In a recent blog post for the Wikimedia Foundation — the non-profit that supports Wikipedia — Vice President for Global Advocacy Rebecca MacKinnon wrote that by requiring sites to scan literally everything before it gets posted, the bill could upend the virtual encyclopedia’s bottom-up approach to content creation. As she put it, the law could destroy Wikipedia’s system “for maintaining encyclopedic integrity.”

You may be wondering precisely what “encyclopedic integrity” means at Wikipedia, where the article on the Marvel Comics character Spider-Man cites almost twice as many sources as the article for the Republic of Chad, a country of an estimated 18.5 million people. I get it. Wikipedia, by its own admission, has had problems with an overrepresentation of the interests of nerdy white male American 20-somethings who have too much time on their hands. But these people also really care about what they post online, and they have created an effective cooperative system for collecting, verifying and building knowledge. The system is totally dependent on the good will of thousands of contributors, and it is wholly decentralized — there are Wikipedia communities across the globe who share some basic principles, but decide together how they’ll handle contributions that could violate the law, offend readers or anything in between. In sharp contrast to corporate social media spaces, where attention is the driver of all things, this is a totally different way to “scale up” — more like scaling out — and it has led to a dramatically different kind of information resource.

I recently spoke with two Wikipedia volunteers in Wales, who are seriously worried about the effects that the U.K. bill might have on Wikipedia’s Welsh-language site, which is the only Wikipedia community that exists almost entirely within the jurisdiction of the U.K. Robin Owain and Jason Evans explained to me just how essential Wikipedia has become for Welsh speakers — with 90 million views in the last 12 months, Welsh Wikipedia is the largest and most popular Welsh-language website on the internet. Young people are a big part of this, and the secondary school system in Wales works actively with the community to engage high school students in building up material on the site. 

For Owain and Evans, this is fundamental to their purpose. “We want young people to feel as though the internet’s something that you can interact with,” Evans said. But the U.K.’s new online safety law could take that away. The two surmise that once the bill is enacted, it will be nearly impossible to allow people under 18 to contribute to the site. It could, as Evans put it, “really reinforce the idea that the internet is just a place to get information, that it’s not something you can be a part of.” 

They also worry that the bill’s requirements regarding content could leave contributors fearful of violating the law. “If there’s anything contentious, anything that has adult themes or strong language, no matter how true something might be, or how factual, there will be a concern that if it’s left on Wiki, there’s a risk that young people will see it and we’ll fall foul of the bill,” said Evans. “That in itself does create an atmosphere where you are essentially censoring Wikipedia, and that goes against everything Wikipedia is about.”

It also stings, the two noted, since the U.K. bill was written with the biggest of Big Tech companies in mind. For some reason, its authors couldn’t be persuaded to make a carve-out for projects like Wikipedia. But Owain has some hope that Welsh people and the Welsh government — a Labour party-dominated legislature that does ultimately answer to the British parliament — just might have something to say about it. 

“I should think the whole of Wales would stand up as one and say, ‘Oh! We will access Wikipedia!’ and the Welsh government will support it,” Owain said, raising a fist in the air. I hope he’s right.

Pro-Palestinian messages are getting shadowbanned and horribly mistranslated on social media. Over the past two weeks, multiple journalists, artists, Instagram influencers and even New York Times reporter Azmat Khan reported that their posts containing words like “Palestine” and “Gaza” simply weren’t reaching followers. To make matters worse, a handful of Instagram users found that the platform was spontaneously inserting the word “terrorist” into its machine-translations of the word “Palestinian” from Arabic to English. This reminds me of 2021, when the Al-Aqsa Mosque in Jerusalem was mistakenly labeled as a “dangerous organization” by the same platform. The takeaway here is that Meta, Facebook and Instagram’s parent company, has told its computers to use things like the U.S. government’s list of designated terror groups in order to identify content that could spark violence. This might sound reasonable on the surface, but when you throw in a little artificial intelligence and some plain old human bias, it can get ugly.

Meta has a long history of mistreating speech about Palestine, and while the company is always quick to blame the tech (it’s a “glitch,” the execs say), the evidence suggests that it is not that simple. Between the U.S. government’s list of designated terror groups, Meta’s own list of “dangerous individuals and organizations,” the EU’s Digital Services Act, soft pressure from the U.S. and Israel alike, and a set of community standards that seems to get more complicated by the day, it seems like the decks are stacked against Palestinians who are just trying to say what they feel right now. I will keep my eyes peeled for further “glitches” in the weeks ahead.

Venezuela saw a smattering of web outages over the weekend, during the political opposition’s presidential primary election, the first to be held since 2012. This was no ordinary vote — public trust in the country’s electoral system is extraordinarily low, due to a history of election fraud allegations and the ruling United Socialist Party’s routine efforts to block bids by its opponents. Opposition organizers created an independent entity, the National Primary Commission, to oversee the election and set up polling places in churches and at people’s homes, rather than using publicly managed buildings like schools and community centers. Over the weekend, the network monitoring group NetBlocks documented huge drops in connectivity in Caracas, and Venezuela Sin Filtro, a censorship monitoring group, reported that websites that listed polling places were inaccessible on most telecom networks. The group also presented evidence that the systems used to count the votes — an estimated 1.5 million people cast their ballots, both inside and outside the country — were hit with cyberattacks. Out of a crowded field, María Corina Machado, a conservative former lawmaker, had won more than 90% of the votes counted by mid-week.

Apple has a problem with Jon Stewart. Last week, the cherished TV comic abruptly canceled the third season of “The Problem with Jon Stewart,” his show on streaming service Apple TV, after the company reportedly pushed back on the script for an episode in which he planned to discuss AI and China. We don’t hear much about Apple in stories about content control and Big Tech, but between the App Store, Apple TV and Apple Podcasts, the company has a huge amount of discretion over what kinds of media and apps its users can most easily access. And when it comes to China — home to the Foxconn factory where half of the world’s iPhones are manufactured — the company has often been quick to bow to censorship demands. There’s been no further information about what exactly Stewart had planned to talk about, but it’s easy to imagine that it might have had Apple’s overlords worried about offending their Chinese business partners.

WHAT WE’RE READING

  • My friend Oiwan Lam, an intrepid Hong Konger who has kept her ear to the ground and her finger on the pulse of the Chinese internet through all the political ups and downs of the past decade, translated a fascinating exclusive interview by a YouTuber known as Teacher Li with a censorship worker from mainland China. Give it a read.
  • In a new essay for Time magazine, Heidy Khlaaf, who specializes in AI safety in high-stakes situations, says we should regulate AI in the same way we do nuclear weapons.
  • The fraud trial of Sam Bankman-Fried, founder of the cryptocurrency exchange FTX, is now well underway in New York. This piece in The Ringer puts you right in the courtroom.

The smart city where everybody knows your name https://www.codastory.com/authoritarian-tech/kazakhstan-smart-city-surveillance/ Thu, 26 Oct 2023 10:05:13 +0000 https://www.codastory.com/?p=47305 In small-town Kazakhstan, an experiment with the “smart city” model has some residents smiling. But it also signals the start of a new mass surveillance era for the Central Asian nation.

At first glance, Aqkol looks like most other villages in Kazakhstan today: shoddy construction, rusting metal gates and drab apartment blocks recall its Soviet past and lay bare the country’s uncertain economic future. But on the village’s outskirts, on a hill surrounded by pine trees, sits a large gray and white cube: a central nervous system connecting thousands of miles of fiber optic cables, sensors and data terminals that keeps tabs on the daily comings and goings of the village’s 13,000 inhabitants. 

This is the command center of Smart Aqkol, a pilot study in digitized urban infrastructure for Kazakhstan. When I visited, Andrey Kirpichnikov, the deputy director of Smart Aqkol, welcomed me inside. Wearing a black Fila tracksuit and sneakers, the middle-aged Aqkol native scanned his face at a console that bore the logo for Hikvision, the Chinese surveillance camera manufacturer. A turnstile gave a green glow of approval and opened, allowing us to walk through.

“All of our staff can access the building using their unique face IDs,” Kirpichnikov told me.

He led me into a room with a large monitor displaying a schematic of the village. The data inputs and connected elements that make up Smart Aqkol draw on everything from solar panels and gas meters to GPS trackers on public service vehicles and surveillance cameras, he explained. Analysts at the command center report their findings to the mayor’s office, highlighting data on energy use, school attendance rates and evidence for police investigations. 

“I see a huge future in what we’re doing here,” Kirpichnikov told me, gesturing at a heat map of the village on the big screen. “Our analytics keep improving and they are only going to get better as we expand the number of sensory inputs.”

“We’re trying to make life better, more efficient and safer,” he explained. “Who would be opposed to such a project?”

Much of Aqkol’s housing and infrastructure dates from the Soviet era.

Smart Aqkol presents an experimental vision of Kazakhstan’s economic prospects and its technocratic leadership’s governing ambitions. In January 2019, when then-President Nursultan Nazarbayev spoke at the project’s launch, he waxed lyrical about a future in which public officials could use networked municipal systems to run Kazakhstan “like a company.” The smart city model is appealing for leaders of the oil-rich nation, which has struggled to modernize its economy and shed its reputation for rampant government corruption. But analysts I spoke with say it also marks a turn toward Chinese-style public surveillance systems. Amid the war in Ukraine, Kazakhstan’s engagement with China has deepened as a way to hedge against dependence on Russia, its former colonial patron.

Kazakhstan’s smart city initiatives aren’t starting from a digital zero. The country has made strides in digitizing public services and now ranks second among countries of the former Soviet Union in the United Nations’ e-governance development index. (Estonia is number one.) The capital, Astana, has also established itself as a regional hub for fintech innovation.

And it’s not only government officials who want these systems. “There is a lot of domestic demand, not just from the state but also from Kazakhstan’s middle class,” said Erica Marat, a professor at the U.S. National Defense University. There’s an allure about smart city systems, which in China and other Asian cities are thought to have improved living standards and reduced crime.

They also hold some promise of increasing transparency around the work of public officials. “The government hopes that digital platforms can overcome cases of petty corruption,” said Oyuna Baldakova, a technology researcher at King’s College London. This would be a welcome shift for Kazakhstan, which currently ranks 101st out of 180 countries on Transparency International’s Corruption Perceptions Index.

Beyond the town’s main street, many roads remain unpaved in Aqkol.

But the pilot in Aqkol doesn’t quite align with these grander ambitions, at least not yet. Back at the command center, Kirpichnikov described how Aqkol saw a drop in violent crime and alcohol-related offenses after the system’s debut. But in a town of this size, where crime rates rarely exceed single digits, these kinds of shifts don’t say a whole lot. 

As if to better prove the point, the team showed me videos of crime dramatizations that they recorded using the Smart Aqkol surveillance camera system. In the first video, one man lifted another off the ground in what was meant to mimic a violent assault, but looked much more like the iconic scene where Patrick Swayze lifts Jennifer Grey overhead at the end of “Dirty Dancing.” Another featured a man brandishing a Kalashnikov in one hand, while using the other to hold his cellphone to his ear. In each case, brightly colored circles and arrows appeared on the screen, highlighting “evidence” of wrongdoing that the cameras captured, like the lift and the Kalashnikov.

Kirpichnikov then led me into Smart Aqkol’s “situation room,” where 14 analysts sat facing a giant LED screen while they tracked various signals around town. Contrary to the high-stakes energy that one might expect in a smart city situation room, the atmosphere here felt more like that of a local pub, with the analysts trading gossip about neighbors as they watched them walk by on the street-level camera feeds.

Kirpichnikov explained that residents can connect their gas meters to their bank accounts and set up automatic gas payments. This aspect of Smart Aqkol has been a boon for the village. Residents I spoke with praised the new payment system — for decades, the only option was to stand in line to pay for their bills, an exercise that could easily take half a day’s time.

And there was more. To highlight the benefits of Smart Aqkol’s analytics work, Kirpichnikov told me about a recent finding: “We were able to determine that school attendance is lower among children from poorly insulated households.” He pointed to a gradation of purple squares showing variance in heating levels across the village. “We could improve school grades, health and the living standards of residents just by updating our old heating systems,” he said.

Kirpichnikov might be right, but step away from the clean digital interface and any Aqkol resident could tell you that poor insulation is a serious problem in the apartment blocks where most people live, especially in winter when temperatures dip below freezing most nights. Broken windows covered with only a thin sheet of cellophane are a common sight. 

Walking around Aqkol, I was struck by the absence of paved roads and infrastructure beyond the village’s main street. Some street lamps work, but others don’t. And the public Wi-Fi that the village prides itself on offering only appeared to function near government buildings.

Informational signs for free Wi-Fi hang across the village despite the network’s limited reach.

The village also has two so-called warm bus shelters — enclosed spaces with heat lamps to shelter waiting passengers during the harsh Kazakh winters. The stops are supposed to have Wi-Fi, charging ports for phones and single-channel TVs. When I passed by one of the shelters, I met an elderly Aqkol resident named Vera. “All of these things are gone,” she told me, waving her hand at evidence of vandalism. “Now all that’s left is the camera at the back.”

“I don’t know why we need all this nonsense here when we barely have roads and running water,” she added with a sigh. “Technology doesn’t make better people.”

Vera isn’t alone in her critique. Smart Aqkol has brought the village an elaborate overlay of digitization, but it’s plain to see that Aqkol still lags far behind modern Kazakh cities like Astana and Almaty when it comes to basic infrastructure. A local resident named Lyubov Gnativa runs a YouTube channel where she talks about Aqkol’s lack of public services and officials’ failures to address these needs. The local government has filed police reports against Gnativa over the years, accusing her of misleading the public.

And a recent documentary made by Radio Free Europe/Radio Liberty — titled “I Love My Town, But There’s Nothing Smart About It” — corroborates many of Gnativa’s observations and includes interviews with dozens of locals drawing attention to water issues and the lack of insulation in many of the village’s homes.

But some residents say they are grateful for how the system has contributed to public safety. Surveillance cameras now monitor the village’s main thoroughfare from lampposts, as well as inside public schools, hospitals and municipal buildings.

“These cameras change the way people behave and I think that’s a good thing,” said Kirpichnikov. He told a story about a local woman who was recently harassed on a public bench, noting that this kind of interaction would often escalate in the past. “The woman pointed at the camera and the man looked up, got scared and began to walk away.”

A middle-aged schoolteacher named Irina told me she feels much safer since the project was implemented in 2019. “I have to walk through a public park at night and it can be intimidating because a lot of young men gather there,” she said. “After the cameras were installed they never troubled me again.”

A resident of Aqkol.

The Smart Aqkol project was the result of a deal between Kazakhtelecom, Kazakhstan’s national telecommunications company; the Eurasian Resources Group, a state-backed mining company; and Tengri Lab, a tech startup based in Astana. But the hardware came through an agreement under China’s Digital Silk Road initiative, which seeks to wire the world in a way that tends to reflect China’s priorities when it comes to public infrastructure and social control. Smart Aqkol uses surveillance cameras made by Chinese firms Dahua and Hikvision, which in China have been used — and touted, even — for their ability to track “suspicious” people and groups. Both companies are sanctioned by the U.S. due to their involvement in surveilling and aiding in the repression of ethnic Uyghurs in Xinjiang, an autonomous region in western China.

Critics are wary of these kinds of systems in Kazakhstan, where skepticism of China’s intentions in Central Asia has been growing. The country is home to a large Uyghur diaspora of more than 300,000 people, many of whom have deep ties to Xinjiang, where both ethnic Uyghurs and ethnic Kazakhs have been systematically targeted and placed in “re-education” camps. Protests across Kazakhstan in response to China’s mass internment campaign have forced the government to negotiate the release of thousands of ethnic Kazakhs from China, but state authorities have walked this line carefully, in an effort to continue expanding economic ties with Beijing.

Although Kazakhstan requires people to get state permission if they want to hold a protest — and permission is regularly denied — demonstrations have nevertheless become increasingly common since 2018. With Chinese-made surveillance tech in hand, it’s become easier than ever for Kazakh authorities to pinpoint unauthorized concentrations of people. Hikvision announced in December 2022 that its software is used by Chinese police to set up “alarms” that are triggered when cameras detect “unlawful gatherings” in public spaces. The company also has claimed that its cameras can detect ethnic minorities based on their unique facial features.

Much of Aqkol’s digitized infrastructure shows its age.

Marat of U.S. National Defense University noted the broader challenges posed by surveillance tech. “We saw during the Covid-19 pandemic how quickly such tech can be adapted to other purposes such as enforcing lockdowns and tracing people’s whereabouts.”

“Such technology could easily be used against protest leaders too,” she added.

In January 2022, instability triggered by rising energy prices resulted in the government issuing “shoot to kill” orders against protesters — more than 200 people were killed in the ensuing clashes. The human rights news and advocacy outlet Bitter Winter wrote at the time that China had sent a video analytics team to Kazakhstan to use cameras it had supplied to identify and arrest protesters. Anonymous sources in their report alleged that the facial profiles of slain protesters were later compared with the facial data of individuals who appeared in surveillance video footage of riots, in an effort to justify government killings of “terrorists.”

With security forming a central promise of the smart city model, broad public surveillance is all but guaranteed. The head of Tengri Lab, the company leading the development of Smart Aqkol, has said in past interviews that school security was a key motivation behind the company’s decision to spearhead the use of artificial intelligence-powered cameras.

“After the high-profile incident in Kerch, we added the ability to automatically detect weapons,” he said, referencing a mass shooting at a college in Russian-occupied Crimea that left more than 20 people dead in October 2018. In that same speech he made an additional claim: “All video cameras in the city automatically detect massive clusters of people,” a veiled reference to the potential for this technology to be used against protesters.

Soon, there will be more smart city systems across Kazakhstan. Smart Aqkol and Kazakhtelecom have signed memorandums of understanding with Almaty, home to almost 2 million people, and Karaganda, with half a million, to develop similar systems. “The mayor of Karaganda was impressed by our technology and capabilities, but he was mainly interested in the surveillance cameras,” Kirpichnikov told me.

As to the question of whether these systems share data with Chinese officials, “we simply don’t have a clear answer on who has the data and how it is used,” Marat told me. “We can’t say definitively whether China has access but we know its companies are extremely dependent on the Chinese state.”

When I reached out to Tengri Lab to ask whether there are concerns regarding the safety of private data connected to the project, the company declined to comment.

Residents of Aqkol.

What does all this mean for Aqkol? The village is so small that the faces captured on camera are rarely those of strangers. The analysts told me they recognize most of the town’s 13,000 inhabitants between them. I asked whether this makes people uncomfortable, knowing their neighbors are watching them at all times.

Danir, a born-and-raised Aqkol analyst in the situation room, told me he doesn’t believe the platform will be abused. “All my friends and family know I am watching from this room and keeping them safe,” he said. “I don’t think anybody feels threatened — we are their friends, their neighbors.”

“People fear what they don’t understand and people complain about the cameras until they need them,” said Kirpichnikov. “There was a woman once who spoke publicly against the project but after we returned her lost handbag — after we spotted it on a camera — she started to see the benefits of what we are building here.”

After a few years with the system up and running, “it’s normal,” said Danir with a shrug. “Nobody has complained to me.”

For regular people, it doesn’t mean a whole lot. And that may be OK, at least for now. As Irina, the schoolteacher whom I met on the village’s main thoroughfare, put it: “I don’t really know what a smart city is, but I like living here. They say we’re safer and my bills are lower than they used to be, and I’m happy.”

The post The smart city where everybody knows your name appeared first on Coda Story.

When AI doesn’t speak your language https://www.codastory.com/authoritarian-tech/artificial-intelligence-minority-language-censorship/ Fri, 20 Oct 2023 14:07:03 +0000 https://www.codastory.com/?p=47275 Better tech could do a lot of good for minority language speakers — but it could also make them easier to surveil

The post When AI doesn’t speak your language appeared first on Coda Story.

If you want to send a text message in Mongolian, it can be tough – it’s a script that most software doesn’t recognize. But for some people in Inner Mongolia, an autonomous region in northern China, that’s a good thing.

When authorities in Inner Mongolia announced in 2020 that Mongolian would no longer be the language of instruction in schools, ethnic Mongolians — who make up about 18% of the population — feared the loss of their language, one of the last remaining markers of their distinctive identity. The news and then plans for protest flowed across WeChat, China’s largest messaging service. Parents were soon marching by the thousands in the streets of the local capital, demanding that the decision be reversed.

Why did we write this story?

The AI industry so far is dominated by technology built by and for English speakers. This story asks what the technology looks like for speakers of less common languages, and how that might change in the near term.

With the remarkable exception of the so-called Zero Covid protests of 2022, demonstrations of any size are incredibly rare in China, partially because online surveillance prevents large numbers of people from openly discussing sensitive issues in Mandarin, much less planning public marches. But because automated surveillance technologies have a hard time with Mongolian, protesters had the advantage of being able to coordinate with relative freedom.

Most of the world’s writing systems have been digitized using centralized standard code (known as Unicode), but the Mongolian script was encoded so sloppily that it is barely usable. Instead, people use a jumble of competing, often incompatible programs when they need to type in Mongolian. WeChat has a Mongolian keyboard, but it’s unwieldy and users often prefer to send each other screenshots of text instead. The constant exchange of images is inconvenient, but it has the unintended benefit of being much more complicated for authorities to monitor and censor.

All but 60 of the world’s roughly 7,000 languages are considered “low-resource” by artificial intelligence researchers. Mongolian belongs to the vast majority of languages barely represented on the internet whose speakers deal with many challenges resulting from the predominance of English on the global internet. As technology improves, automated processes across the internet — from search engines to social media sites — may start to work a lot better for under-resourced languages. This could do a lot of good, giving those language speakers access to all kinds of tools and markets, but it will likely also reduce the degree to which languages like Mongolian fly under the radar of censors. The tradeoff for languages that have historically hovered on the margins of the internet is between safety and convenience on one hand, and freedom from censorship and intrusive eavesdropping on the other.

Back in Inner Mongolia, when parents were posting on WeChat about their plans to protest, it became clear that the app’s algorithms couldn’t make sense of the jpegs of Mongolian cursive, said Soyonbo Borjgin, a local journalist who covered the protests. The images and the long voice messages that protesters would exchange were protected by the Chinese state’s ignorance — there were no AI resources available to monitor them, and overworked police translators had little chance of surveilling all possibly subversive communication. 

China’s efforts to stifle the Mongolian language within its borders have only intensified since the protests. Keen on the technological dimensions of the battle, Borjgin began looking into a machine learning system that was being developed at Inner Mongolia University. The system would allow computers to read images of the Mongolian script, after being fed and trained on digital reams of printed material that had been published when Mongolian still had Chinese state support. While reporting the story, Borjgin was told by the lead researcher that the project had received state money. Borjgin took this as a clear signal: The researchers were getting funding because what they were doing amounted to a state security project. The technology would likely be used to prevent future dissident organizing.

First-graders on the first day of school in Hohhot, Inner Mongolia Autonomous Region of China in August 2023. Liu Wenhua/China News Service/VCG via Getty Images.

Until recently, AI has only worked well for the vanishingly small number of languages with large bodies of texts to train the technology on. Even national languages with hundreds of millions of speakers, like Bangla, have largely remained outside the priorities of tech companies. Last year, though, both Google and Meta announced projects to develop AI for under-resourced languages. But while newer AI models are able to generate some output in a wide set of languages, there’s not much evidence to suggest that it’s high quality. 

Gabriel Nicholas, a research fellow at the Center for Democracy and Technology, explained that once tech companies have established the capacity to process a new language, they have a tendency to congratulate themselves and then move on. A market dominated by “big” languages gives them little incentive to keep investing in improvements. Hellina Nigatu, a computer science PhD student at the University of California, Berkeley, added that low-resource languages face the risk of “constantly trying to catch up” — or even losing speakers — to English.

Researchers also warn that even as the accuracy of machine translation improves, language models miss out on important, culturally specific details that can have real-world consequences. Companies like Meta, which partially rely on AI to review social media posts for things like hate speech and violence, have run into problems when they try to use the technology for under-resourced languages. Because they’ve been trained on just the few texts available, their AI systems too often have an incomplete picture of what words mean and how they’re used.

Arzu Geybulla, an Azerbaijani journalist who specializes in digital censorship, said that one problem with using AI to moderate social media content in under-resourced languages is the “lack of understanding of cultural, historical, political nuances in the way the language is being used on these platforms.” In Azerbaijan, where violence against Armenians is regularly celebrated online, the word “Armenian” itself is often used as a slur to attack dissidents. Because the term is innocuous in most other contexts, it’s easy for AI and even non-specialist human moderators to overlook its use. She also noted that AI used by social media platforms often lumps the Azerbaijani language together with languages spoken in neighboring countries: Azerbaijanis frequently send her screenshots of automated replies in Russian or Turkish to the hate speech reports they’d submitted in Azerbaijani.

But Geybulla believes improving AI for monitoring hate speech and incitement in Azerbaijani will lock in an essentially defective system. “I’m totally against training the algorithm,” she told me. “Content moderation needs to be done by humans in all contexts.” In the hands of an authoritarian government, sophisticated AI for previously neglected languages can become a tool for censorship. 

According to Geybulla, Azerbaijani currently has such “an old school system of surveillance and authoritarianism that I wouldn’t be surprised if they still rely on Soviet methods.” Given the government’s demonstrated willingness to jail people for what they say online and to engage in mass online astroturfing, she believes that improving automated flagging for the Azerbaijani language would only make the repression worse. Instead of strengthening these easily abusable technologies, she argues that companies should invest in human moderators. “If I can identify inauthentic accounts on Facebook, surely someone at Facebook can do that too, and faster than I do,” she said. 

Different languages require different approaches when building AI. Indigenous languages in the Americas, for instance, show forms of complexity that are hard to account for without either large amounts of data — which they currently do not have — or diligent expert supervision. 

One such expert is Michael Running Wolf, founder of the First Languages AI Reality initiative, who says developers underestimate the challenge of American languages. While working as a researcher on Amazon’s Alexa, he began to wonder what was keeping him from building speech recognition for Cheyenne, his mother’s language. Part of the problem, he realized, was computer scientists’ unwillingness to recognize that American languages might present challenges that their algorithms couldn’t understand. “All languages are seen through the lens of English,” he told me.

Running Wolf thinks Anglocentrism is mostly to blame for the neglect that Indigenous languages have faced in the tech world. “The AI field, like any other space, is occupied by people who are set in their ways and unintentionally have a very colonial perspective,” he told me. “It’s not as if we haven’t had the ability to create AI for Indigenous languages until today. It’s just no one cares.” 

American languages were put in this position deliberately. Until well into the 20th century, the U.S. government’s policy position on Indigenous American languages was eradication. From 1860 to 1978, tens of thousands of children were forcibly separated from their parents and kept in boarding schools where speaking their mother tongues brought beatings or worse. Nearly all Indigenous American languages today are at immediate risk of extinction. Running Wolf hopes AI tools like machine translation will make Indigenous languages easier to learn to fluency, making up for the current lack of materials and teachers and reviving the languages as primary means of communication.

His project also relies on training young Indigenous people in machine learning — he’s already held a coding boot camp on the Lakota reservation. If his efforts succeed, he said, “we’ll have Indigenous peoples who are the experts in natural language processing.” Running Wolf said he hopes this will help tribal nations to build up much-needed wealth within the booming tech industry.

The idea of his research allowing automated surveillance of Indigenous languages doesn’t scare Running Wolf so much, he told me. He compared their future online to their current status in the high school basketball games that take place across North and South Dakota. Indigenous teams use Lakota to call plays without their opponents understanding. “And guess what? The non-Indigenous teams are learning Lakota so that they know what the Lakota are doing,” Running Wolf explained. “I think that’s actually a good thing.”

The problem of surveillance, he said, is “a problem of success.” He hopes for a future in which Indigenous computer scientists are “dealing with surveillance risk because the technology’s so prevalent and so many people speak Chickasaw, so many people speak Lakota or Cree, or Ute — there’s so many speakers that the NSA now needs to have the AI so that they can monitor us,” referring to the U.S. National Security Agency, infamous for its snooping on communications at home and abroad.

Not everyone wishes for that future. The Cheyenne Nation, for instance, wants little to do with outsiders, he told me, and isn’t currently interested in using the systems he’s building. “I don’t begrudge that perspective because that’s a perfectly healthy response to decades, generations of exploitation,” he said.

Like Running Wolf, Borjgin believes that in some cases, opening a language up to online surveillance is a sacrifice necessary to keep it alive in the digital era. “I somewhat don’t exist on the internet,” he said. Because their language has such a small online culture, he said, “there’s an identity crisis for Mongols who grew up in the city,” pushing them instead towards Mandarin. 

Despite the intense political repression that some of China’s other ethnic minorities face, Borjgin said, “one thing I envy about Tibetan and Uyghur is once I ask them something they will just google it with their own input system and they can find the result in one second.” Even though he knows that it will be used to stifle dissent, Borjgin still supports improving the digitization of the Mongol script: “If you don’t have the advanced technology, if it only stays to the print books, then the language will be eradicated. I think the tradeoff is okay for me.”

Losing lifelines in Gaza https://www.codastory.com/newsletters/israel-gaza-electricity/ Thu, 19 Oct 2023 13:48:16 +0000 https://www.codastory.com/?p=47244 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Sudan’s Omar al-Bashir might be on TikTok, and dating apps are becoming dangerous in Uganda.

The post Losing lifelines in Gaza appeared first on Coda Story.

NO BATTERY LEFT

It has been more than a week since Israel cut off electricity, water, fuel and food shipments for 2.3 million people in Gaza, as part of its response to the unprecedented attacks launched by Hamas on October 7. Internet shutdowns have become an all-too-common tool of control in conflict situations around the world. But an enforced power cut takes it to another level entirely. It makes network shutdowns look like child’s play.

UN human rights chief Volker Turk, Human Rights Watch and the International Committee of the Red Cross have all said these cuts amount to a violation of international humanitarian law — in other words, a war crime.

Yet the power is still out. The blackout has caused a cascade of problems for all kinds of systems, from water pumps and sanitation to telecommunications networks, in an already catastrophic situation. Under bombardment by Israel, more than 3,000 Gazans have been killed, thousands have been injured and, according to the United Nations, about a million people displaced. 

It is getting more and more difficult for people in Gaza to stay in contact with each other, and with people outside the territory. I spoke with Asmaa Alkaisi, a recent graduate of the University of Washington’s international studies school, who came to the U.S. from Gaza, where she has lived most of her life. 

As recently as two weeks ago, Alkaisi had a daily habit of checking in with her family, most importantly her mother, on video calls. But over the past 10 days, she has been unable to reach them. She has resorted to checking lists of the dead and missing, to see if their names appear.

“If you don’t see their names in the lists of missing or killed ones, then you know that they’re OK,” she said. It has become almost impossible to reach people locally. “I have lost contact with my best friend for 11 days now,” she said. “I honestly don’t know if she’s still alive.”

She explained that reports on television have new importance. “I found out from the news and the videos that my house was completely destroyed and leveled to the ground,” she told me. “I didn’t know that from my family, I found out from the news.”

At 39 years old, Alkaisi has lived through many periods of intense conflict in Gaza, but this “tops everything we have ever been through,” she said. She told me about a classmate of hers in the U.S. who once asked if Gazans “get used to” living with the looming threat of military aggression from Israel. The question shocked her.

“Every time this happens, it brings back all the trauma, it is as if it’s the first time it is happening,” she said. “We’re all shocked, we’re all in fear, we’re all petrified of the situation. You could be the next target. That’s more scary than anything in the world.”

And just like everyone else in the territory, journalists are facing terrifying, life-threatening circumstances. The BBC’s Rushdi Abu Alouf wrote on Tuesday about his own struggle to report on the devastation while trying to keep his family safe. With so much of what is happening on the ground being called into question by actors on all sides, these accounts really matter, and they will be harder and harder to capture and preserve as the situation worsens. 

I looked at a different part of this issue last week, focusing on the reckless spread of disinformation by people who are not on the ground. But I shied away from the most consequential reports, like the gut-wrenching — but unsupported — allegation that babies in Israel were decapitated by Hamas, thinking it would be better not to repeat this bloody narrative, lest it be perpetuated.

My former colleague Reem Al-Masri, a media policy and disinformation researcher from Jordan, called me out on this. “Yes, social media is fertile ground for disinformation, but inaccurate information is only as harmful as its reach,” she wrote in an email. “We cannot treat misinformation that stays within the galaxy of social media the same way once it has made its way to officials,” she wrote, referring to U.S. President Joe Biden. Both Israeli and U.S. officials repeated this story, only to acknowledge later that they had no evidence to support it. This kind of disinformation is uniquely dangerous, Reem cautioned, because it affects how states and other actors make wartime decisions. She’s right. Thank you, Reem.

Hamas is abusing Facebook’s livestream feature. The families of several of the nearly 200 Israelis being held hostage by Hamas have reported that their captors are breaking into their loved ones’ Facebook accounts and in some cases livestreaming attacks or messages from wherever victims are being held. The account breach at the root of this is one thing, which unfortunately isn’t a new tactic — I’ve seen police do this in situations where colleagues have been arrested or detained. And this particular use of livestream calls to mind mass shootings that have been broadcast in the same way, most famously the massacre of 51 people at two mosques in Christchurch, New Zealand, in 2019. Facebook’s parent company Meta says it’s got a war room of people fluent in Arabic and Hebrew who are reviewing posts and trying to make game-time decisions on what should stay up and what should come down — this is good, though these efforts have pitfalls of their own, as Meta’s auditors noted a few years back. But there’s no way to “review” a livestream. At this point, if I could make them get rid of the feature, I would.

Is Sudan’s Omar al-Bashir back in action? Or is it just the AI talking? While everyone in the West seems to be watching Israel and Palestine, the conflict in Sudan continues unabated. Last week, the BBC dug into The Voice of Sudan, a viral TikTok account that since August has been posting audio missives that it claims are leaked recordings from former President Omar al-Bashir, who was ousted following mass protests in 2019. This is a real eyebrow-raiser, since al-Bashir hasn’t been seen in public for more than a year. But through The Voice of Sudan account, he is apparently speaking again, sounding in good health and criticizing the Sudanese army.

Forensics experts who’ve studied the recordings say that they display hallmarks of deep fakes and that they probably were made using an off-the-shelf artificial intelligence “voice cloning” tool that could capture audio from the former president’s actual speeches and then use that material to generate convincing imitations of him. The reporters talked with Mohamed Suliman, a Sudanese AI researcher at Northeastern University whose work I’ve highlighted in the past. “What’s alarming is that these recordings could also create an environment where many disbelieve even real recordings,” he told them. This is a really good point, and it’s instructive for this moment, far beyond Sudan. With so many convincing fakes making the rounds, it seems easier every day to question what’s real.

Dating apps are becoming dangerous in Uganda. The country’s updated law that criminalizes homosexuality has been on the books for a few months now, and public data shows that 17 people were arrested under the law in June and July. Two of them were “caught” expressing an LGBTQ identity — which is now literally a crime in Uganda — on dating apps. The Kampala-based Human Rights Awareness and Promotion Forum found that in both instances, the gay men using dating apps were effectively entrapped by another user who then reported them to police.

WHAT WE’RE READING

  • The Guardian published an explosive investigation of Amazon’s warehouses in Saudi Arabia, where dozens of Nepali workers told reporters they were tricked by recruiters, forced to work under harsh conditions, laid off and then made to pay sky-high fees in order to return home.
  • Rest of World talked with Meredith Whittaker, president of Signal, about how governments from Brazil to India and now the U.K. have put the future of the privacy-first messaging app on the line.
  • Writing for The Atlantic, acclaimed AI reporter Karen Hao dug deep into the critical battle playing out between the U.S. and China over tech export controls and who owns the future of AI. Don’t miss this one.

The post Losing lifelines in Gaza appeared first on Coda Story.

]]>
How the global anti-LGBTQ movement found a home in Turkey https://www.codastory.com/disinformation/lgbtq-rights-turkey-erdogan/ Wed, 18 Oct 2023 12:40:28 +0000 https://www.codastory.com/?p=47138 An international anti-LGBTQ movement is making headway in Turkey, where the government is presenting homosexuality and transgenderism as an imposition of Western imperialism

The post How the global anti-LGBTQ movement found a home in Turkey appeared first on Coda Story.

]]>
Kursat Mican scrolled through pictures on his phone as I sat across from him at a large wooden desk. He showed me one photo: a painting of a man in a blue dress. He scrolled on, then paused and held up the phone again. This one is of two lesbians, he told me.

We were meeting at offices owned by the Yesevi Alperenler Association, a nationalist Islamist organization run by Mican, who also leads a coalition of conservative Turkish nongovernmental organizations. Dressed in a blue suit and shirt, Mican fidgeted with his pen as we talked. The 41-year-old was affable, but was eager to get to his next task.

Why did we write this story?

Grappling with a steep economic downturn and public frustration with the government’s slow response to the devastating earthquakes that hit southeast Turkey in February, President Erdogan and his allies have seized the opportunity to make the LGBTQ community a scapegoat, using similar language to a burgeoning global anti-LGBTQ rights movement.

“There was a belly dancer in front of a mosque, there were naked statues where you can see their body details, and symbols of satanism,” Mican told me. He was describing the works featured in an exhibition at ArtIstanbul Feshane, a cultural center in Istanbul’s Eyup neighborhood. In Mican’s view, the show was disrespectful of Islam and Turkey, and an attempt at spreading LGBTQ “propaganda.” “The owners of the artwork and the organizer of the exhibition will be punished,” he said.

Titled “Starting from the Middle,” the exhibition featured a diverse set of works by 300 artists and was organized by the Istanbul Metropolitan Municipality, which is led by Istanbul Mayor Ekrem Imamoglu, a member of the CHP, the secular left-wing party that is Turkey’s main opposition. Pieces included photographs of the Gezi Park protests in 2013 against the government’s creeping authoritarianism; a video that explores a massacre of Alevi Kurds by the Turkish army in the 1930s; and a text accompanying an installation that talks about the artist’s struggles as an LGBTQ person in Turkey.

Although the show had support from CHP-aligned public officials, other elements in Istanbul’s city government saw it differently. Last month, prosecutors in Istanbul launched an investigation into the organizers of the exhibition, which closed as scheduled in late September, on allegations of “fomenting enmity and hatred among the public or insulting them” under Article 216 of the Turkish Penal Code. The law has frequently been used to criminalize blasphemy or retaliate against critics of President Recep Tayyip Erdogan.

The “Starting from the Middle” exhibition held at the ArtIstanbul Feshane in the Eyup neighborhood in Istanbul. Photos courtesy of Ozcan Yaman.

But the case against the art show didn’t exactly start with Turkish authorities. A few days after the opening, a headline in the state-aligned newspaper Sabah read: “Istanbul Metropolitan Municipality supports LGBT perversion! Outraged exhibition in Feshane: It should be closed immediately.”

The next day, Mican led a group protest outside the exhibition with people chanting, “We don’t want perversion in our neighborhood.” ArtIstanbul Feshane is situated in the Eyup neighborhood of Istanbul, a symbolic area to Muslims in Turkey as it is home to the burial site of Abu Ayyub al-Ansari, a close companion of the Prophet Muhammad.

In early July, after attending one of Mican’s speeches about the event, a group of men tried to break through a line of police officers in an effort to vandalize the space. Mican says he did not encourage the violence, but also said that if the exhibition had not been held in such a religious area, the reaction would have been more muted.

“The police struggled to hold the people when I was reading the statement, they had to get 10 times more security,” Mican said. “If they hadn’t done it in the Eyup neighborhood we wouldn’t see that much reaction, so many people wouldn’t even know about it. I didn’t encourage the people to do that, but the people were angry and they gave a reaction.”

And now prosecutors have launched their investigation, following a criminal complaint against the exhibition, filed by Mican’s organization. 

None of this came as a shock to the show’s curators or to the artists involved. “Every time we want to open an exhibition, especially in a conservative area, we open it with the fear of being attacked,” said Okyanus Cagri Camci, a transgender woman and interdisciplinary artist whose work was featured in the show.

For artists like Camci, the prosecution’s investigation is part of an increasingly familiar pattern, in which criticism from conservative groups and the state-aligned media is followed by legal repercussions.

Figures like Mican appear to have increased their influence on prominent political leaders in Turkey, drawing them down a more conservative path than they walked in the past. Grappling with a steep economic downturn and public frustration with the government’s slow response to the devastating earthquakes that hit southeast Turkey in February, Erdogan and his allies have seized the opportunity to make the LGBTQ community a scapegoat, using similar language to a burgeoning global anti-LGBTQ rights movement.

This newer shade of Erdogan and his AKP party was on full display during presidential and parliamentary elections in May, when Erdogan ramped up attacks on the LGBTQ community to rally support among his right-wing and religiously conservative base. “The family institution of this nation is strong, there will be no LGBT people in this nation,” said Erdogan at a rally in April. Erdogan and his allies are also seeking to turn rhetoric into legislative changes, starting with an amendment to the constitution that would define marriage as solely between a man and woman. 

President Recep Tayyip Erdogan targeted the LGBTQ community during pre-election rallies. Mustafa Kamaci/Anadolu Agency via Getty Images.

Suleyman Soylu, deputy leader of the AKP and a former interior minister, made the erroneous claim to a group of NGOs in April that the LGBTQ community “also includes the marriage of animals and humans.” He accused the community of being under the control of Europe and the U.S., who “want a single type of human model where they follow a single universal religion, are genderless, and no one is in the family structure.” The tone and messaging in these speeches echoed the language of a swelling global movement that claims Western liberals are staging an assault on traditional family structures by imposing homosexuality and transgenderism on societies across the world. This movement has anchors in Russia, Hungary and the U.S. and is gaining a foothold in countries around the world, including, it seems, in Turkey. Mican confirmed to me that his organization has connections with groups in Russia, Hungary and Serbia — another place where LGBTQ people are facing increased hostility.

It wasn’t always like this under Erdogan, who has been president of Turkey since 2014, and served as prime minister for more than a decade prior to that. Mican lamented that as recently as two years ago, Erdogan was unwilling to talk about LGBTQ issues in the same way as he is now.

Kubra Uzun, a singer and DJ who is non-binary, has observed the same evolution, albeit from a different vantage point. Life under Erdogan was not always as bad as it is now, they said. But Uzun told me that in recent years, they’ve felt increasingly unsafe. “If I’m not playing or if I’m not having anything outside to do, like if I’m not shopping, I don’t go out anymore,” they said. “I mostly stay at home and read and listen to music.”

When we met at their home in late September, there was a small group of friends sitting in their kitchen. One was a trans woman who Uzun was hosting after she fled her home city in part because she feared for her safety. The community refers to Uzun as a mother, but they do not like being called that. “I am non-binary and mothering feels binary to me,” they told me.

Lying on the sofa and puffing on a cigarette, Uzun recounted a “golden period” in Turkey in the early 2000s, when there were fewer restrictions. 

“It was like you were in London clubbing,” they said. “You can walk freely, you can wear whatever you want.” But those times are long gone.

A Pride party in Izmir on June 3, 2023. Murat Kocabas/SOPA Images/LightRocket via Getty Images.

Although the tides began to turn following an economic recession in 2009, it was after the Gezi Park protests of 2013 that people like Uzun saw a real shift. At that time, what began as a vocal rejection of plans to build a shopping mall in a public park in Istanbul’s central Taksim Square ultimately drew hundreds of thousands of Turkish people to take a public stand against what they saw as the AKP’s erosion of secularism in Turkey and the dismantling of key democratic institutions, namely press freedom. It became a seminal moment in deepening the divide between liberal secular Turks and Islamist conservative supporters of Erdogan. 

Three years later, Turkey witnessed a failed coup attempt that was carried out by military personnel, but which Erdogan has long insisted was orchestrated by the U.S.-based Islamic cleric Fethullah Gulen. In the ensuing period, Erdogan launched a major clampdown on Turkish society, imprisoning thousands of critics of the government that he and his allies accused of being stooges of the West seeking to undermine Turkey. By 2020, nearly 100,000 people had been jailed pending trial for alleged links to the Gulen movement. From Kurds to followers of Gulen and now, increasingly, gay and trans people, Erdogan has framed a variety of groups as enemies of the state, allowing him to cast out critics while boosting his popularity among his political base. He has passed sweeping legislative and constitutional changes that curtail freedom of expression, cementing his hold on power.

Along the way, Mican and other leading conservative figures have pushed politicians to harden their stance on the issue. Prior to Istanbul’s Pride march in 2016, Mican told state officials he and his organization would intervene if the event went ahead. Mican was later fined for making threatening remarks, but the march was also banned by the Istanbul governor’s office after they cited security concerns and the need to protect public order.

For the ninth consecutive year, the Istanbul pride march was banned in June, with the AKP governor of Istanbul saying it posed a threat to family institutions. Police clad in riot gear detained 113 people who marched despite the restrictions.

Security forces put in place heightened security measures in Taksim Square and Istiklal Street. When the group tried to march on June 18, 2023, despite the ban, police intervened. Hakan Akgun via Getty Images.

The more Erdogan focuses on homosexuality and transgenderism, the more other parties have started putting anti-LGBTQ policies into their agendas. Mican himself underlined this point in our conversation. The Vatan Party, a nationalist secular party that has supported Erdogan, in the past used protection from the threat of terrorism as a central tenet of its platform. Now it has shifted to the so-called threat of the “LGBTQ agenda.”

Even the CHP and other opposition parties have thus far remained quiet on discrimination against the LGBTQ community, particularly around the election period, said Suay Ergin-Boulougouris, a program officer at Article 19, an international organization that promotes freedom of expression. When I asked Uzun about whether they would have felt better if the CHP had won instead of Erdogan, they responded, “Same shit, different color.”

Uzun fears that Turkey is turning into Russia, where the state frequently equates homosexuality with pedophilia and has passed a series of anti-LGBTQ laws over the past decade. Erdogan further solidified his position on gay and trans rights on the global stage in 2021, when he pulled Turkey out of the Istanbul Convention, an international treaty opposing violence against women, after religious conservative groups criticized the law, arguing that it was degrading family values and wrongly advocating for the rights of the LGBTQ community. The convention has come under attack from leaders in several Eastern European countries, who argue that the document’s definition of gender is a way to dismantle traditional distinctions between men and women and a way to “normalize” homosexuality.

Another state that has notably hit the brakes on accession to the convention is Hungary. The government of Prime Minister Viktor Orbán has also tried to push through a ban on the use of materials seen as promoting homosexuality and gender change at schools. The law is currently being challenged before the Court of Justice of the European Union, which interprets EU laws to make sure they are applied equally in every EU member state. 

Populist leaders have positioned the family as something sacrosanct and used the idea that it is being destroyed by Western liberals as a way into power, said Wendy Via, president of the U.S.-based Global Project Against Hate and Extremism.

Right-wing leaders in the U.S. and Europe have framed LGBTQ rights as an agenda, personifying the concept as an enemy entity that is taking over. But Via argues the real entity that is taking over is a vast, well-resourced network of organizations with anti-LGBTQ and anti-woman agendas.

In Turkey, that network consists of dozens of conservative NGOs, who on September 17 held a large rally called the “Big Family Gathering” in the Eminonu area of Istanbul, for which Mican was one of the key organizers.

Protestors gathered in Istanbul for an anti-LGBTQ rally on September 17, 2023. Ileker Eray/Middle East Images/AFP via Getty Images.

At the gathering, conservatively dressed mothers and their children held signs that read “Stop Pedophilia” and milled about while speaker after speaker decried Western imperialism before a crowd estimated by organizers to number in the thousands. Part way through the rally, Alexander Dugin, the far-right Russian political philosopher with close ties to Russian President Vladimir Putin, appeared on a large screen and gave the crowd a speech about the need to fight global liberalism. It is “the fight of all normal people,” he told the crowd, “to save the normal relations between sexes, to save the family, to save the dignity of the human being.”

At the end of the rally, sitting on a park bench as people bustled around us clearing away equipment, I spoke to two men in their 20s, Kayahan Cetin and Yunus Emre Ozgun. They lead Turkiye Genclik Birligi, a youth organization closely associated with the pro-Russia Vatan Party. Cetin spoke in Turkish and Ozgun helped interpret into English, sometimes chiming in himself.

The pair were proud to note their connections with Dugin and Putin’s United Russia party. Cetin and his group are associated with Vatan, but they also identify as Kemalists, a secular ideology that seeks to follow the principles of the Turkish Republic’s founder Kemal Ataturk. This means they may not always see eye to eye with the Islamist right who dominate the anti-LGBTQ movement in Turkey. But they share the common belief that LGBTQ rights present an existential threat to Turkish society and that they are an agenda being imposed by the West.

Cetin is trying to push legislation that would crack down on what they call “LGBTQ propaganda and institutions” and pointed to similar laws on the books in Russia, Hungary and China. Cetin says he has no problem with people’s individual “choice” to be gay, but wants parliament to place restrictions on organizations who are using their platforms to support LGBTQ rights through the media, including streaming platforms such as Netflix and Disney+. These kinds of cultural interventions are already underway — Turkey’s Radio and Television Supreme Council in July fined Netflix, Disney+, Amazon Prime Video and Mubi, among other streaming platforms, accusing them of depicting homosexual relationships that are “contrary to social and cultural values and the Turkish family structure.”

With local elections in March 2024, the LGBTQ community fears Erdogan’s attacks on them will be amplified further. The government is seeking to implement laws that will ban content seen to promote LGBTQ identities in schools, a blow to younger gay and transgender people already struggling in the current environment. Last month the national education minister, Yusuf Tekin, said that authorities must fight homosexuality and that a new optional course called “The Family in Turkish Society” had been added to the school curriculum.

Two days after our first meeting, I met Uzun again at a club in the heart of Istanbul’s tourist district. There was a power cut soon after I arrived. When the lights came back on again, Uzun was quick to get back on the dancefloor. The room filled with a red glow as queer Istanbulites danced freely, the jubilant scene in stark contrast to the seismic shifts occurring beyond the walls beaded in sweat.

At the end of the night I had to wait my turn to say goodbye to Uzun. I asked them one final question about why Istanbul’s queer scene seemed to be thriving despite all the restrictions and threats against it. Uzun shouted over the music, “Text me your question.” They texted me their response the next morning: “RESISTANCE.”

But this isn’t the whole story. It is hard to resist when you fear being attacked on any street corner. Uzun told me that over the course of the past year, more than 50 of their friends had left Turkey. And they may be next. If their visa application is accepted, Uzun will leave for London.

The post How the global anti-LGBTQ movement found a home in Turkey appeared first on Coda Story.

]]>
How Big Tech is fueling — and monetizing — false narratives about Israel and Palestine https://www.codastory.com/newsletters/how-big-tech-is-fueling-and-monetizing-false-narratives-about-israel-and-palestine/ Fri, 13 Oct 2023 13:16:51 +0000 https://www.codastory.com/?p=47123 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: African tech workers take on Big Tech, Manipur bans violent images online, and the U.N. is “tech-washing” Saudi Arabia.

The post How Big Tech is fueling — and monetizing — false narratives about Israel and Palestine appeared first on Coda Story.

]]>
THE FOG OF DIGITAL DISINFORMATION

I have few words for the atrocities carried out by Hamas in Israel since October 7, and the horrors that are now unfolding in Gaza.

I have a few more for a certain class of social media users at this moment. The violence in Israel and Palestine has triggered what feels like a never-ending stream of pseudo-reporting on the conflict: allegations, rumors and straight-up falsehoods about what is happening are emerging at breakneck speed. I’m not talking about posts from people who are actually on the ground and may be saying or reporting things that are not verified. That’s the real fog of war. Instead, I’m talking about posts from people who jump into the fray not because they have something urgent to report or say, but just because they can.

Social media has given many of us the illusion of total access to a conflict situation, a play-by-play in real time. In the past, this was enlightening — or at least it felt that way. During the Gaza War in 2014, firsthand civilian accounts were something you could readily find on what was then called Twitter, if you knew where to look. I remember reading one journalist’s tweets about her desperate attempt to flee Gaza at the Rafah border crossing, amid heavy shelling by Israeli forces — her story stuck with me for years, returning to my mind whenever Gaza came up. These kinds of narratives may still be out there, but they are almost impossible to find amidst the clutter. And this time around, those stories from Gaza could disappear from the web altogether, now that Israel has cut off electricity in the territory, and internet access there is in free fall.

This illusion of being close to a conflict, of being able to understand its contours from far away is no longer a product of carefully reported news and firsthand accounts on social media. Sure, there was garbage out there in 2014, but nearly a decade on, it feels as if there are just as many posts about war crimes that never happened as there are about actual atrocities that did. Our current internet, not to mention the state of artificial intelligence, makes it too easy to spread misinformation and lies. 

On October 9, tens of thousands of people shared reports that Israeli warplanes had bombed a historic church in Gaza, complete with photos that could convince anyone who hasn’t actually been to that site. The church itself posted on Facebook to discredit the reports and assure people that it remains untouched. Conflict footage from Syria, Afghanistan, and as far away as Guatemala has been “recycled” and presented as contemporary proof of brutalities committed by one side or the other. And of course there are the “videos” of airstrikes that turned out to be screengrabs from the video game “Arma 3.” Earnest fact-checking outfits and individual debunkers have rushed in to correct and inform, but it’s not clear how much difference this makes. People look to have their biases confirmed, and then scurry on through the digital chaos.

Some are even posting about the war for money. Speaking with Brooke Gladstone of “On The Media” on October 12, tech journalist Avi Asher-Shapiro pointed out that at the same time that X has dismissed most of its staff who handled violent and false content on the platform, it has created new incentives for this kind of behavior by enabling “creators” to profit from the material they post. So whether a post is true or not, the more likes, clicks and shares it gets, the more money its creator rakes in. TikTok offers incentives like this too.

While X appears to be the unofficial epicenter of this maelstrom, the disinformation deluge is happening on Meta’s platforms and TikTok too. All three companies are now on the hook for it in the European Union. EU Commissioner Thierry Breton issued a series of public letters to their CEOs, pointing out that under the bloc’s Digital Services Act, they have to answer to regulatory authorities when they fail to stop the spread of content that could lead to actual harm.

The sheer volume of disinformation is hard to ignore. And it is an unconscionable distraction from the grave realities and horror of the war in Gaza.

In pursuit of mass scale, the world’s biggest social media companies designed their platforms to host limitless amounts of content. This is nearly impossible for them to oversee or manage, as the events in Israel and Palestine demonstrate. Yet from Myanmar and Sudan to Ukraine and the U.S., it has been proven again and again that violent material on social media can trigger acts of violence in real life, and that people are worse off when the algorithms get the run of the place. The companies have never fully gotten ahead of this issue. Instead, they have cobbled together a combination of technology and people to do the work of identifying the worst posts and scrubbing them from the web. 

The people — content moderators — typically review hundreds of posts each day, from videos of racist diatribes to beheadings and sexual abuse. They see the worst of the worst. If they didn’t, the platforms would be replete with this kind of material, and no one would want to use them. That is not a viable business model.

Despite the core need for robust content moderation, Big Tech companies outsource most of it to third-party companies operating in countries where labor is cheap, like India or the Philippines. Or Kenya, where workers report being paid between $1 and $4 per hour and having limited access to counseling — a serious problem in a job like this.

This week, Coda Story reporter Erica Hellerstein brought us a deep dive on the lives of content moderation workers in Nairobi who over the past several months have come together to push back on what they say are exploitative labor practices. More than 180 content moderators are suing Meta for $1.6 billion over poor working conditions, low pay and what they allege was unfair dismissal after Meta switched contracting companies. Workers have also voted to form a new trade union that they hope will force big companies like Meta, and outsourcing firms like Sama, to change their ways. Erica writes:

“While it happens at a desk, mostly on a screen, the demands and conditions of this work are brutal. Current and former moderators I met in Nairobi in July told me this work has left them with post-traumatic stress disorder, depression, insomnia and thoughts of suicide.

These workers are reaching a breaking point. And now, Kenya has become ground zero in a battle over the future of content moderation in Africa and beyond. On one side are some of the most powerful and profitable tech companies on earth. On the other are young African content moderators who are stepping out from behind their screens and demanding that Big Tech companies reckon with the human toll of their enterprise.”

Odanga Madung, a Kenya-based journalist and a fellow at the Mozilla Foundation, believes the flurry of litigation and organizing represents a turning point in the country’s tech labor trajectory. In his words: “This is the tech industry’s sweatshop moment.” Don’t miss this terrific, if sobering read.

Images of violence are also at issue in Manipur, India, where a new government order has effectively banned people from posting videos and photos depicting acts of violence. This is serious because Manipur has been immersed in waves of public unrest and outbursts of ethnic violence since May. After photos of the slain bodies of two students who had gone missing in July surfaced and went viral on social media last month, authorities shut down the internet in an effort to stem unrest. In the words of the state government, the new order is intended as a “positive step towards bringing normalcy in the State.” But not everyone is buying this. On X yesterday, legal scholar Apar Gupta called the order an attempt to “contour” media narratives that would also “silence the voices of the residents of the state even beyond the internet shutdown.”

The U.N. is helping Saudi Arabia to “tech-wash” itself. This week, officials announced that the kingdom will host the world’s biggest global internet policy conference, the Internet Governance Forum (IGF), in 2024. This U.N.-sponsored gathering of governments, corporations and tech-focused NGOs might sound dull — I’ve been to a handful of them and can confirm that some of it is indeed a yawn. But some of it really matters. The IGF is a place where influential policymakers hash out ideas for how the global internet ought to work and how it can be a positive force in an open society — or how it can do the opposite. After China and Iran, I can think of few places that would be worse to do this than Saudi Arabia, a country that uses technology to exercise authoritarianism in more ways than we probably know.

The post How Big Tech is fueling — and monetizing — false narratives about Israel and Palestine appeared first on Coda Story.

]]>
Indian journalists are being treated like terrorists for doing their jobs https://www.codastory.com/authoritarian-tech/newsclick-raids-press-freedom-decline-india/ Thu, 12 Oct 2023 11:23:01 +0000 https://www.codastory.com/?p=47096 Accused of receiving Chinese funding, the founder of a digital newsroom critical of the Modi government faces terrorism charges

The post Indian journalists are being treated like terrorists for doing their jobs appeared first on Coda Story.

]]>
When India hosted the G20 summit last month, it presented itself as the “mother of democracy” to the parade of leaders and delegations from the world’s largest economies. But at home, when the world is not watching as closely, Prime Minister Narendra Modi is systematically clamping down on free speech.

In a dramatic operation that began as the sun rose on Delhi on October 3, police raided the homes of journalists across the city. Police seized laptops and mobile phones, and interrogated reporters about stories they had written and any money they might have received from foreign bank accounts. The journalists targeted by the police work for NewsClick, a small but influential website founded in 2009 by Prabir Purkayastha, an engineer by training who is also a prominent advocate for left-wing causes and ideas. 

At the time of publication, Purkayastha and a senior NewsClick executive had been held in judicial custody for 10 days. The allegations they face are classified under India’s Unlawful Activities (Prevention) Act, anti-terrorism legislation that was amended in 2019 to give the government sweeping powers to combat terrorist activity.

Purkayastha, a journalist of considerable standing, is effectively being likened to a terrorist.

Reporters surround NewsClick’s founder and editor Prabir Purkayastha as he is led away by the Delhi police. NewsClick is accused of accepting funds to spread Chinese propaganda. Raj K Raj/Hindustan Times via Getty Images.

The day after the raids on the more than 40 NewsClick employees and contributors, a meeting was called at the Press Club of India. Among the many writers and journalists in attendance was the internationally celebrated, Booker Prize-winning author Arundhati Roy. A longtime critic of Indian government policies, regardless of the political party in power, Roy told me that India was in “an especially dangerous moment.” 

She argued that the Modi government was deliberately conflating terrorism and journalism, that they were cracking down on what they described as “intellectual terrorism and narrative terrorism.” It has to do, she told me, “with changing the very nature of the Indian constitution and the very understanding of checks and balances.” She said the targeting of NewsClick, which has about four million YouTube subscribers, was intended as a warning against digital publications.

The Indian government had targeted NewsClick before, investigating what it said were illegal sources of foreign funding from China. For these latest raids, the catalyst appears to have been, at least in part, an investigation published in The New York Times in August that connected NewsClick to Neville Roy Singham, an Indian-American tech billionaire who, the story alleges, has funded the spread of Chinese propaganda through a “tangle of nonprofit groups and shell companies.”

In the lengthy article, The New York Times reporters made only brief mention of NewsClick, claiming that the site “sprinkled its coverage with Chinese government talking points.” They also quoted a phrase from a video that NewsClick published in 2019 about the 70th anniversary of the 1949 revolution which ended with the establishment of the People’s Republic of China: “China’s history continues to inspire the working classes.” But it appeared to be enough for the Delhi police to seize equipment from and intimidate even junior staff members, cartoonists and freelance contributors to the site. 

Angered by the unintended consequences of The New York Times report, a knot of protestors gathered outside its New York offices near Times Square a couple of days after the raids. Kavita Krishnan, an author and self-described Marxist feminist, wrote on the Indian news and commentary website Scroll that she had warned The New York Times reporters who had contacted her for comment on the Singham investigation that their glancing reference to NewsClick would give the Modi government ammunition to harass Indian journalists. 

The “NYT needs to hold its own practices up to scrutiny and ask itself if, in this case, they have allowed themselves to become a tool for authoritarian propaganda and criminalization of journalism in India,” she wrote.

While The New York Times stood by its story, a Times spokesperson told Scroll that they “would find it deeply troubling and unacceptable if any government were to use our reporting as an excuse to silence journalists.”

On October 10, a Delhi court ordered that Purkayastha and NewsClick’s human resources head Amit Chakraborty be held in judicial custody for 10 days, even as their lawyers insisted that there was no evidence that NewsClick had “received any funding or instructions from China or Chinese entities.”

India’s difficult relationship with China is at a particularly low ebb, with tens of thousands of troops amassed along their disputed borders and diplomats and journalists on both sides frequently expelled. From a Western point of view, India is also being positioned as a strategically vital counterweight to Chinese dominance of the Indo-Pacific region. Though diplomatic tensions are high, India’s trade with China has — until a 0.9% drop in the first half of this year — flourished, reaching a record $136 billion last year. 

While the Indian government continues to court Chinese investment, it is suspicious of the Chinese smartphone industry — which controls about 70% of India’s smartphone market — and of any foreign stake in Indian media groups. The mainstream Indian media is increasingly controlled by corporate titans close to Modi. For instance, Mukesh Ambani and Gautam Adani, who control vast conglomerates that touch on everything from cooking oil and fashion to petroleum oil and infrastructure and who have at various points in the last year been two of the 10 richest men in the world, also own major news networks. 

By March this year, Adani completed his hostile takeover of NDTV, widely considered to have been India’s last major mainstream news network to consistently hold the Modi government to account. Independent journalists and organizations such as NewsClick that report critically on the government are now out of necessity building their own audiences on platforms such as YouTube. Cutting off these organizations’ access to funds, particularly from foreign sources, helps tighten the Modi government’s grip on India’s extensive if poorly funded media. 

Siddharth Varadarajan, a founder of the Indian news website The Wire, said that the actions taken against NewsClick are “an attack on an independent media organization at a time when many media organizations are singing the tune of the government.” It was not a surprise, he told me, that Delhi police were asking NewsClick journalists about their reporting on the farmers’ protests in India between August 2020 and December 2021. “While the government says it is investigating a crime on the level of terrorism, the main goal is to delegitimize and criminalize certain topics and lines of inquiry.”

The allegations against NewsClick’s Purkayastha and Chakraborty fall under the Unlawful Activities (Prevention) Act. Under the provisions of the act, passed in 2019, the government has the power to designate individuals as terrorists before they are convicted by a court of law. It is a piece of legislation that, as United Nations special rapporteurs noted in a letter to the Indian government, undermines India’s signed commitments to uphold international human rights.

Legislative changes introduced by the Modi government include a new data protection law and a proposed Digital India Act, both of which give it untrammeled access to communications and private data. These laws also formalize its authority to demand information from multinational tech companies — India already leads the world in seeking to block verified journalists from posting content on X, the platform formerly known as Twitter — and even shut down the internet, something that it has done for days and even months on end in states across the country during periods of unrest. 

India’s willingness to clamp down on freedom of information is reflected in its steep slide down the annual World Press Freedom Index. Currently ranked 161 out of 180 countries, India has slipped by 20 places since 2014 when Modi became prime minister. “The violence against journalists, the politically partisan media and the concentration of media ownership all demonstrate that press freedom is in crisis in ‘the world’s largest democracy,’” observes Reporters Without Borders, which compiles the ranking. 

Atul Chaurasia, the managing editor at the Indian digital news platform Newslaundry, told me that “all independent and critical journalists feel genuine fear that tomorrow the government may go after them.” In the wake of the NewsClick raids, Chaurasia described the Indian government as the “father of hypocrisy,” an acerbic reference to the Modi government’s boasts about India’s democratic credentials when world leaders, including U.S. President Joe Biden, arrived in Delhi in September for the G20 summit.

When Biden and Modi held a bilateral meeting in Delhi before the summit began, Reuters reported that “the U.S. press corps was sequestered in a van, out of eyesight of the two leaders — an unusual situation for the reporters and photographers who follow the U.S. President at home and around the world to witness and record his public appearances.” Modi himself, despite being the elected leader of a democracy for nearly 10 years, has never answered questions in a press conference in India. 

Instead, Modi addresses the nation once a month on a radio broadcast titled “Mann ki baat,” meaning “words from the heart.” And he very occasionally gives seemingly scripted interviews to friendly journalists and fawning movie stars. 

As for unfriendly journalists, Purkayastha is currently in judicial custody while a variety of Indian investigative agencies are on what Arundhati Roy called a “fishing expedition,” rooting through journalists’ phones and NewsClick’s finances and tax filings in search of evidence of wrongdoing. Varadarajan of The Wire told me that the message being sent to readers and viewers of NewsClick and other sites intent on holding the Modi government to account was clear: “Don’t trust their content and don’t even think about giving them money because they are raising money for anti-national activities.”

U.S. President Joe Biden and Indian Prime Minister Narendra Modi greet each other at the G20 leaders’ summit in Delhi last month. Evan Vucci/POOL/AFP via Getty Images.

Since my conversation with Roy at the Press Club of India on October 4, it has been reported that she faces the possibility of arrest. 

Delhi’s lieutenant governor — an official appointed by the government and considered the constitutional, if unelected, head of the Indian capital — cleared the way for her to be prosecuted for stating in 2010 that in her opinion, Kashmir, the site of long-running territorial conflict between India and Pakistan, has “never been an integral part of India.” A police complaint was filed 13 years ago, but Indian regulations require state authorities to sign off on prosecutions involving crimes such as hate speech and sedition. Now they have.

Apar Gupta, a lawyer, writer and advocate for digital rights, describes the Modi government’s eagerness to use the law and law enforcement agencies against its critics as “creating a climate of threat and fear.” Young people especially, he told me, have to have “extremely high levels of motivation to follow their principles because practicing journalism now comes with the acute threat of prosecution, of censorship, of trolling, and of adverse reputational and social impacts.”

A young NewsClick reporter, requesting anonymity, told me that “with every knock at the door, I feel like they’ve finally come for me.” They described the paranoia that had gripped their parents: “My father now only contacts me on Signal because it’s end-to-end encrypted. I could never have imagined any of this.”

Following the NewsClick raids, Rajiv Malhotra, an Indian-American Hindu supremacist ideologue, appeared on a major Indian news network to openly call for the Modi government to target even more independent journalists. Malhotra singled out the People’s Archive of Rural India (PARI), a website founded by P. Sainath, an award-winning journalist committed to foregrounding the perspectives of rural and marginalized people. 

On what grounds does Malhotra suggest that the Modi government go after Sainath and PARI? The site, Malhotra told the newscaster, who did not interrupt him, encourages young villagers, Dalits (a caste once referred to as “untouchable”), Muslims and other minorities to “tell their story of dissent and grievances against the nation state.” 

Criticism of the nation and its authorities, in other words, is akin to sowing division. Whether it’s an opinion given in 2010 or a reference to Chinese funding within an article from a newspaper loathed by supporters of Modi and his Hindu nationalist ideology, the Indian government will apparently use any excuse to silence its critics. 

The post Indian journalists are being treated like terrorists for doing their jobs appeared first on Coda Story.

Silicon Savanna: The workers taking on Africa’s digital sweatshops https://www.codastory.com/authoritarian-tech/kenya-content-moderators/ Wed, 11 Oct 2023 11:11:00 +0000 https://www.codastory.com/stayonthestory/silicon-savannah-taking-on-africas-digital-sweatshops-in-the-heart-of-silicon-savannah/ Content moderators for TikTok, Meta and ChatGPT are demanding that tech companies reckon with the human toll of their enterprise.



 Silicon Savanna: The workers taking on Africa’s digital sweatshops

This story was updated at 6:30 ET on October 16, 2023

Wabe didn’t expect to see his friends’ faces in the shadows. But it happened after just a few weeks on the job.

He had recently signed on with Sama, a San Francisco-based tech company with a major hub in Kenya’s capital. The middleman company was providing the bulk of Facebook’s content moderation services for Africa. Wabe, whose name we’ve changed to protect his safety, had previously taught science courses to university students in his native Ethiopia.

Why did we write this story?

The world’s biggest tech companies today have more power and money than many governments. This story offers a deep dive on court battles in Kenya that could jeopardize the outsourcing model upon which Meta has built its global empire.

Now, the 27-year-old was reviewing hundreds of Facebook photos and videos each day to decide if they violated the company’s rules on issues ranging from hate speech to child exploitation. He would get between 60 and 70 seconds to make a determination, sifting through hundreds of pieces of content over an eight-hour shift.

One day in January 2022, the system flagged a video for him to review. He opened up a Facebook livestream of a macabre scene from the civil war in his home country. What he saw next was dozens of Ethiopians being “slaughtered like sheep,” he said. 

Then Wabe took a closer look at their faces and gasped. “They were people I grew up with,” he said quietly. People he knew from home. “My friends.”

Wabe leapt from his chair and stared at the screen in disbelief. He felt the room close in around him. Panic rising, he asked his supervisor for a five-minute break. “You don’t get five minutes,” she snapped. He turned off his computer, walked off the floor, and beelined to a quiet area outside of the building, where he spent 20 minutes crying by himself.

Wabe had been building a life for himself in Kenya while back home, a civil war was raging, claiming the lives of an estimated 600,000 people from 2020 to 2022. Now he was seeing it play out live on the screen before him.

That video was only the beginning. Over the next year, the job brought him into contact with videos he still can’t shake: recordings of people being beheaded, burned alive, eaten.

“The word evil is not equal to what we saw,” he said. 

Yet he had to stay in the job. Pay was low — less than two dollars an hour, Wabe told me — but going back to Ethiopia, where he had been tortured and imprisoned, was out of the question. Wabe worked with dozens of other migrants and refugees from other parts of Africa who faced similar circumstances. Money was too tight — and life too uncertain — to speak out or turn down the work. So he and his colleagues kept their heads down and steeled themselves each day for the deluge of terrifying images.

Over time, Wabe began to see moderators as “soldiers in disguise” — a low-paid workforce toiling in the shadows to make Facebook usable for billions of people around the world. But he also noted a grim irony in the role he and his colleagues played for the platform’s users: “Everybody is safe because of us,” he said. “But we are not.”  

Wabe said dozens of his former colleagues in Sama’s Nairobi offices now suffer from post-traumatic stress disorder. Wabe has also struggled with thoughts of suicide. “Every time I go somewhere high, I think: What would happen if I jump?” he wondered aloud. “We have been ruined. We were the ones protecting the whole continent of Africa. That’s why we were treated like slaves.”

The West End Towers house the Nairobi offices of Majorel, a Luxembourg-based content moderation firm with over 22,000 employees on the African continent.

To most people using the internet — most of the world — this kind of work is literally invisible. Yet it is a foundational component of the Big Tech business model. If social media sites were flooded with videos of murder and sexual assault, most people would steer clear of them — and so would the advertisers that bring the companies billions in revenue.

Around the world, an estimated 100,000 people work for companies like Sama, third-party contractors that supply content moderation services for the likes of Facebook’s parent company Meta, Google and TikTok. But while it happens at a desk, mostly on a screen, the demands and conditions of this work are brutal. Current and former moderators I met in Nairobi in July told me this work has left them with post-traumatic stress disorder, depression, insomnia and thoughts of suicide.

These “soldiers in disguise” are reaching a breaking point. Because of people like Wabe, Kenya has become ground zero in a battle over the future of content moderation in Africa and beyond. On one side are some of the most powerful and profitable tech companies on earth. On the other are young African content moderators who are stepping out from behind their screens and demanding that Big Tech companies reckon with the human toll of their enterprise.

In May, more than 150 moderators in Kenya, who keep the worst of the worst off of platforms like Facebook, TikTok and ChatGPT, announced their drive to create a trade union for content moderators across Africa. The union would be the first of its kind on the continent and potentially in the world.

There are also major pending lawsuits before Kenya’s courts targeting Meta and Sama. More than 180 content moderators — including Wabe — are suing Meta for $1.6 billion over poor working conditions, low pay and what they allege was unfair dismissal after Sama ended its content moderation agreement with Meta and Majorel picked up the contract instead. The plaintiffs say they were blacklisted from reapplying for their jobs after Majorel stepped in. In August, a judge ordered both parties to settle the case out of court, but the mediation broke down on October 16 after the plaintiffs’ attorneys accused Meta of scuttling the negotiations and ignoring moderators’ requests for mental health services and compensation. The lawsuit will now proceed to Kenya’s employment and labor relations court, with an upcoming hearing scheduled for October 31.

The cases against Meta are unprecedented. According to Amnesty International, it is the “first time that Meta Platforms Inc will be significantly subjected to a court of law in the global south.” Forthcoming court rulings could jeopardize Meta’s status in Kenya and the content moderation outsourcing model upon which it has built its global empire. 

Meta did not respond to requests for comment about moderators’ working conditions and pay in Kenya. In an emailed statement, a spokesperson for Sama said the company cannot comment on ongoing litigation but is “pleased to be in mediation” and believes “it is in the best interest of all parties to come to an amicable resolution.”

Odanga Madung, a Kenya-based journalist and a fellow at the Mozilla Foundation, believes the flurry of litigation and organizing marks a turning point in the country’s tech labor trajectory. 

“This is the tech industry’s sweatshop moment,” Madung said. “Every big corporate industry here — oil and gas, the fashion industry, the cosmetics industry — have at one point come under very sharp scrutiny for the reputation of extractive, very colonial type practices.”

Nairobi may soon witness a major shift in the labor economics of content moderation. But it also offers a case study of this industry’s powerful rise. The vast capital city — sometimes called “Silicon Savanna” — has become a hub for outsourced content moderation jobs, drawing workers from across the continent to review material in their native languages. An educated, predominantly English-speaking workforce makes it easy for employers from overseas to set up satellite offices in Kenya. And the country’s troubled economy has left workers desperate for jobs, even when wages are low.

Sameer Business Park, a massive office compound in Nairobi’s industrial zone, is home to Nissan, the Bank of Africa, and Sama’s local headquarters. But just a few miles away lies one of Nairobi’s largest informal settlements, a sprawl of homes made out of scraps of wood and corrugated tin. The slum’s origins date back to the colonial era, when the land it sits on was a farm owned by white settlers. In the 1960s, after independence, the surrounding area became an industrial district, attracting migrants and factory workers who set up makeshift housing on the area adjacent to Sameer Business Park.

For companies like Sama, the conditions here were ripe for investment by 2015, when the firm established a business presence in Nairobi. Headquartered in San Francisco, the self-described “ethical AI” company aims to “provide individuals from marginalized communities with training and connections to dignified digital work.” In Nairobi, it has drawn its labor from residents of the city’s informal settlements, including 500 workers from Kibera, one of the largest slums in Africa. In an email, a Sama spokesperson confirmed moderators in Kenya made between $1.46 and $3.74 per hour after taxes.

Grace Mutung’u, a Nairobi-based digital rights researcher at Open Society Foundations, put this into local context for me. On the surface, working for a place like Sama seemed like a huge step up for young people from the slums, many of whom had family roots in factory work. It was less physically demanding and more lucrative. Compared to manual labor, content moderation “looked very dignified,” Mutung’u said. She recalled speaking with newly hired moderators at an informal settlement near the company’s headquarters. Unlike their parents, many of them were high school graduates, thanks to a government initiative in the mid-2000s to get more kids in school.

“These kids were just telling me how being hired by Sama was the dream come true,” Mutung’u told me. “We are getting proper jobs, our education matters.” These younger workers, Mutung’u continued, “thought: ‘We made it in life.’” They thought they had left behind the poverty and grinding jobs that wore down their parents’ bodies. Until, she added, “the mental health issues started eating them up.” 

Today, 97% of Sama’s workforce is based in Africa, according to a company spokesperson. And despite its stated commitment to providing “dignified” jobs, it has caught criticism for keeping wages low. In 2018, the company’s late founder argued against raising wages for impoverished workers from the slum, reasoning that it would “distort local labor markets” and have “a potentially negative impact on the cost of housing, the cost of food in the communities in which our workers thrive.”

Content moderation did not become an industry unto itself by accident. In the early days of social media, when “don’t be evil” was still Google’s main guiding principle and Facebook was still cheekily aspiring to connect the world, this work was performed by employees in-house for the Big Tech platforms. But as companies aspired to grander scales, seeking users in hundreds of markets across the globe, it became clear that their internal systems couldn’t stem the tide of violent, hateful and pornographic content flooding people’s newsfeeds. So they took a page from multinational corporations’ globalization playbook: They decided to outsource the labor.

More than a decade on, content moderation is now an industry that is projected to reach $40 billion by 2032. Sarah T. Roberts, a professor of information studies at the University of California at Los Angeles, wrote the definitive study on the moderation industry in her 2019 book “Behind the Screen.” Roberts estimates that hundreds of companies are farming out these services worldwide, employing upwards of 100,000 moderators. In its own transparency documents, Meta says that more than 15,000 people moderate its content at more than 20 sites around the world. Some (it doesn’t say how many) are full-time employees of the social media giant, while others (it doesn’t say how many) work for the company’s contracting partners.

Kauna Malgwi was once a moderator with Sama in Nairobi. She was tasked with reviewing content on Facebook in her native language, Hausa. She recalled watching coworkers scream, faint and develop panic attacks on the office floor as images flashed across their screens. Originally from Nigeria, Malgwi took a job with Sama in 2019, after coming to Nairobi to study psychology. She told me she also signed a nondisclosure agreement instructing her that she would face legal consequences if she told anyone she was reviewing content on Facebook. Malgwi was confused by the agreement, but moved forward anyway. She was in graduate school and needed the money.

A 28-year-old moderator named Johanna described a similar decline in her mental health after watching TikTok videos of rape, child sexual abuse, and even a woman ending her life in front of her own children. Johanna currently works with the outsourcing firm Majorel, reviewing content on TikTok, and asked that we identify her using a pseudonym, for fear of retaliation by her employer. She told me she’s extroverted by nature, but after a few months at Majorel, she became withdrawn and stopped hanging out with her friends. Now, she dissociates to get through the day at work. “You become a different person,” she told me. “I’m numb.”

This is not the experience that the Luxembourg-based multinational — which employs more than 22,000 people across the African continent — touts in its recruitment materials. On a page about its content moderation services, Majorel’s website features a photo of a woman wearing a pair of headphones and laughing. It highlights the company’s “Feel Good” program, which focuses on “team member wellbeing and resiliency support.”

According to the company, these resources include 24/7 psychological support for employees “together with a comprehensive suite of health and well-being initiatives that receive high praise from our people,” Karsten König, an executive vice president at Majorel, said in an emailed statement. “We know that providing a safe and supportive working environment for our content moderators is the key to delivering excellent services for our clients and their customers. And that’s what we strive to do every day.”

But Majorel’s mental health resources haven’t helped ease Johanna’s depression and anxiety. She says the company provides moderators in her Nairobi office with on-site therapists who see employees in individual and group “wellness” sessions. But Johanna told me she stopped attending the individual sessions after her manager approached her about a topic she had shared in confidence with her therapist. “They told me it was a safe space,” Johanna explained, “but I feel that they breached that part of the confidentiality so I do not do individual therapy.” TikTok did not respond to a request for comment by the time of publication.

Instead, she looked for other ways to make herself feel better. Nature has been especially healing. Whenever she can, Johanna takes herself to Karura Forest, a lush oasis in the heart of Nairobi. One afternoon, she brought me to one of her favorite spots there, a crashing waterfall beneath a canopy of trees. This is where she tries to forget about the images that keep her up at night. 

Johanna remains haunted by a video she reviewed out of Tanzania, where she saw a lesbian couple attacked by a mob, stripped naked and beaten. She thought of them again and again for months. “I wondered: ‘How are they? Are they dead right now?’” At night, she would lie awake in her bed, replaying the scene in her mind.

“I couldn’t sleep, thinking about those women.”

Johanna’s experience lays bare another stark reality of this work. She was powerless to help victims. Yes, she could remove the video in question, but she couldn’t do anything to bring the women who were brutalized to safety. This is a common scenario for content moderators like Johanna, who are not only seeing these horrors in real-time, but are asked to simply remove them from the internet and, by extension, perhaps, from public record. Did the victims get help? Were the perpetrators brought to justice? With the endless flood of videos and images waiting for review, questions like these almost always go unanswered.

The situation that Johanna encountered highlights what David Kaye, a professor of law at the University of California at Irvine and the former United Nations special rapporteur on freedom of expression, believes is one of the platforms’ major blindspots: “They enter into spaces and countries where they have very little connection to the culture, the context and the policing,” without considering the myriad ways their products could be used to hurt people. When platforms introduce new features like livestreaming or new tools to amplify content, Kaye continued, “are they thinking through how to do that in a way that doesn’t cause harm?”

The question is a good one. For years, Meta CEO Mark Zuckerberg famously urged his employees to “move fast and break things,” an approach that doesn’t leave much room for the kind of contextual nuance that Kaye advocates. And history has shown the real-world consequences of social media companies’ failures to think through how their platforms might be used to foment violence in countries in conflict.

The most searing example came from Myanmar in 2017, when Meta famously looked the other way as military leaders used Facebook to incite hatred and violence against Rohingya Muslims as they ran “clearance operations” that left an estimated 24,000 Rohingya people dead and caused more than a million to flee the country. A U.N. fact-finding mission later wrote that Facebook had a “determining role” in the genocide. After commissioning an independent assessment of Facebook’s impact in Myanmar, Meta itself acknowledged that the company didn’t do “enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more.”

Yet five years later, another case now before Kenya’s high court deals with the same issue on a different continent. Last year, Meta was sued by a group of petitioners including the family of Meareg Amare Abrha, an Ethiopian chemistry professor who was assassinated in 2021 after people used Facebook to orchestrate his killing. Amare’s son tried desperately to get the company to take down the posts calling for his father’s head, to no avail. He is now part of the suit that accuses Meta of amplifying hateful and malicious content during the conflict in Tigray, including the posts that called for Amare’s killing.

The case underlines the strange distance between Big Tech behemoths and the content moderation industry that they’ve created offshore, where the stakes of moderation decisions can be life or death. Paul Barrett, the deputy director of the Center for Business and Human Rights at New York University’s Stern School of Business, who authored a seminal 2020 report on the issue, believes this distance helped corporate leadership preserve their image of a shiny, frictionless world of tech. Social media was meant to be about abundant free speech, connecting with friends and posting pictures from happy hour — not street riots or civil war or child abuse.

“This is a very nitty gritty thing, sifting through content and making decisions,” Barrett told me. “They don’t really want to touch it or be in proximity to it. So holding this whole thing at arm’s length as a psychological or corporate culture matter is also part of this picture.”

Sarah T. Roberts likened content moderation to “a dirty little secret. It’s been something that people in positions of power within the companies wish could just go away,” Roberts said. This reluctance to deal with the messy realities of human behavior online is evident today, even in statements from leading figures in the industry. For example, with the July launch of Threads, Meta’s new Twitter-like social platform, Instagram head Adam Mosseri expressed a desire to keep “politics and hard news” off the platform.

The decision to outsource content moderation meant that this part of what happened on social media platforms would “be treated at arm’s length and without that type of oversight and scrutiny that it needs,” Barrett said. But the decision had collateral damage. In pursuit of mass scale, Meta and its counterparts created a system that produces an impossible amount of material to oversee. By some estimates, three million items of content are reported on Facebook alone on a daily basis. And despite what some of Silicon Valley’s other biggest names tell us, artificial intelligence systems are insufficient moderators. So it falls on real people to do the work.

One morning in late July, James Oyange, a former tech worker, took me on a driving tour of Nairobi’s content moderation hubs. Oyange, who goes by Mojez, is lanky and gregarious, quick to offer a high five and a custom-made quip. We pulled up outside a high-rise building in Westlands, a bustling central neighborhood near Nairobi’s business district. Mojez pointed up to the sixth floor: Majorel’s local office, where he worked for nine months, until he was let go.

He spent much of his year in this building. Pay was bad and hours were long, and it wasn’t the customer service job he’d expected when he first signed on — this is something he brought up with managers early on. But the 26-year-old grew to feel a sense of duty about the work. He saw the job as the online version of a first responder — an essential worker in the social media era, cleaning up hazardous waste on the internet. But being the first to the scene of the digital wreckage changed Mojez, too — the way he looks, the way he sleeps, and even his life’s direction.

That morning, as we sipped coffee in a trendy, high-ceilinged cafe in Westlands, I asked how he’s holding it together. “Compared to some of the other moderators I talked to, you seem like you’re doing okay,” I remarked. “Are you?”

His days often started bleary-eyed. When insomnia got the best of him, he would force himself to go running under the pitch-black sky, circling his neighborhood for 30 minutes and then stretching in his room as the darkness lifted. At dawn, he would ride the bus to work, snaking through Nairobi’s famously congested roads until he arrived at Majorel’s offices. A food market down the street offered some moments of relief from the daily grind. Mojez would steal away there for a snack or lunch. His vendor of choice doled out tortillas stuffed with sausage. He was often so exhausted by the end of the day that he nodded off on the bus ride home.

And then, in April 2023, Majorel told him that his contract wouldn’t be renewed.

It was a blow. Mojez walked into the meeting fantasizing about a promotion. He left without a job. He believes he was blacklisted by company management for speaking up about moderators’ low pay and working conditions.

A few weeks later, an old colleague put him in touch with Foxglove, a U.K.-based legal nonprofit supporting the lawsuit currently in mediation against Meta. The organization also helped organize the May meeting in which more than 150 African content moderators across platforms voted to unionize.

At the event, Mojez was stunned by the universality of the challenges facing moderators working elsewhere. He realized: “This is not a Mojez issue. These are 150 people across all social media companies. This is a major issue that is affecting a lot of people.” After that, despite being unemployed, he was all in on the union drive. Mojez, who studied international relations in college, hopes to do policy work on tech and data protection someday. But right now his goal is to see the effort through, all the way to the union’s registration with Kenya’s labor department.

Mojez’s friend in the Big Tech fight, Wabe, also went to the May meeting. Over lunch one afternoon in Nairobi in July, he described what it was like to open up about his experiences publicly for the first time. “I was happy,” he told me. “I realized I was not alone.” This awareness has made him more confident about fighting “to make sure that the content moderators in Africa are treated like humans, not trash,” he explained. He then pulled up a pant leg and pointed to a mark on his calf, a scar from when he was imprisoned and tortured in Ethiopia. The companies, he said, “think that you are weak. They don’t know who you are, what you went through.”

A popular lunch spot for workers outside Majorel’s offices.

Looking at Kenya’s economic woes, you can see why these jobs were so alluring. My visit to Nairobi coincided with a string of July protests that paralyzed the city. The day I flew in, it was unclear if I would be able to make it from the airport to my hotel — roads, businesses and public transit were threatening to shut down in anticipation of the unrest. The demonstrations, which have been bubbling up every so often since last March, came in response to steep new tax hikes, but they were also about the broader state of Kenya’s faltering economy — soaring food and gas prices and a youth unemployment crisis, some of the same forces that drive throngs of young workers to work for outsourcing companies and keep them there.

Leah Kimathi, a co-founder of the Kenyan nonprofit Council for Responsible Social Media, believes Meta’s legal defense in the labor case brought by the moderators betrays Big Tech’s neo-colonial approach to business in Kenya. When the petitioners first filed suit, Meta tried to absolve itself by claiming that it could not be brought to trial in Kenya, since it has no physical offices there and did not directly employ the moderators, who worked for Sama. But a Kenyan labor court saw it differently, ruling in June that Meta — not Sama — was the moderators’ primary employer and that the case against the company could move forward.

“So you can come here, roll out your product in a very exploitative way, disregarding our laws, and we cannot hold you accountable,” Kimathi said of Meta’s legal argument. “Because guess what? I am above your laws. That was the exact colonial logic.”

Kimathi continued: “For us, sitting in the Global South, but also in Africa, we’re looking at this from a historical perspective. Energetic young Africans are being targeted for content moderation and they come out of it maimed for life. This is reminiscent of slavery. It’s just now we’ve moved from the farms to offices.”

As Kimathi sees it, the multinational tech firms and their outsourcing partners made one big, potentially fatal miscalculation when they set up shop in Kenya: They didn’t anticipate a workers’ revolt. If they had considered the country’s history, perhaps they would have seen the writing of the African Content Moderator’s Union on the wall.

Kenya has a rich history of worker organizing in resistance to the colonial state. The labor movement was “a critical pillar of the anti-colonial struggle,” Kimathi explained to me. She and other critics of Big Tech’s operations in Kenya see a direct line from colonial-era labor exploitation — and the worker organizing that resisted it — to the present day.

“They thought that they would come in and establish this very exploitative industry and Kenyans wouldn’t push back,” she said. Instead, they sued.

What happens if the workers actually win?

Foxglove, the nonprofit supporting the moderators’ legal challenge against Meta, writes that the outcome of the case could disrupt the global content moderation outsourcing model. If the court finds that Meta is the “‘true employer’ of their content moderators in the eyes of the law,” Foxglove argues, “then they cannot hide behind middlemen like Sama or Majorel. It will be their responsibility, at last, to value and protect the workers who protect social media — and who have made tech executives their billions.”

But there is still a long road ahead, for the moderators themselves and for the kinds of changes to the global moderation industry that they are hoping to achieve.

In Kenya, the workers involved in the lawsuit and union face practical challenges. Some, like Mojez, are unemployed and running out of money. Others are migrant workers from elsewhere on the continent who may not be able to stay in Kenya for the duration of the lawsuit or union fight.

The Moderator’s Union is not yet registered with Kenya’s labor office, but if it becomes official, its members intend to push for better conditions for moderators working across platforms in Kenya, including higher salaries and more psychological support for the trauma endured on the job. And their ambitions extend far beyond Kenya. The network hopes to inspire similar actions in other countries’ content moderation hubs. According to Martha Dark, Foxglove’s co-founder and director, the industry’s working conditions have spawned a cross-border, cross-company organizing effort, drawing employees from Africa, Europe and the U.S.

“There are content moderators that are coming together from Poland, America, Kenya, and Germany talking about what the challenges are that they experience when trying to organize in the context of working for Big Tech companies like Facebook and TikTok,” she explained.

Still, there are big questions about whether litigation alone can transform the moderation industry. “It would be good if outsourced content reviewers earned better pay and were better treated,” NYU’s Paul Barrett told me. “But that doesn’t get at the issue that the mother companies here, whether it’s Meta or anybody else, is not hiring these people, is not directly training these people and is not directly supervising these people.” Even if the Kenyan workers are victorious in their lawsuit against Meta, and the company is stung in court, “litigation is still litigation,” Barrett explained. “It’s not the restructuring of an industry.”

So what would truly fix the moderation industry’s core problem? For Barrett, the industry will only see meaningful change if companies bring “more, if not all of this function in-house.”

But Sarah T. Roberts, who interviewed workers from Silicon Valley to the Philippines for her book on the global moderation industry, believes collective bargaining is the only pathway forward for changing the conditions of the work. She dedicated the end of her book to the promise of organized labor.

“The only hope is for workers to push back,” she told me. “At some point, people get pushed too far. And the ownership class always underestimates it. Why does Big Tech want everything to be computational in content moderation? Because AI tools don’t go on strike. They don’t talk to reporters.”

Artificial intelligence is part of the content moderation industry, but it will probably never be capable of replacing human moderators altogether. What we do know is that AI models will continue to rely on human beings to train and oversee their data sets — a reality Sama’s CEO recently acknowledged. For now and the foreseeable future, there will still be people behind the screen, fueling the engines of the world’s biggest tech platforms. But because of people like Wabe and Mojez and Kauna, their work is becoming more visible to the rest of us.

While writing this piece, I kept returning to one scene from my trip to Nairobi that powerfully drove home the raw humanity at the base of this entire industry, powering the whole system, as much as the tech scions might like to pretend otherwise. I was in the food court of a mall, sitting with Malgwi and Wabe. They were both dressed sharply, like they were on break from the office: Malgwi in a trim pink dress and a blazer, Wabe in leather boots and a peacoat. But instead, they were just talking about how work ruined them.

At one point in the conversation, Wabe told me he was willing to show me a few examples of violent videos he snuck out while working for Sama and later shared with his attorney. If I wanted to understand “exactly what we see and moderate on the platform,” Wabe explained, the opportunity was right in front of me. All I had to do was say yes.

I hesitated. I was genuinely curious. A part of me wanted to know, wanted to see first-hand what he had to deal with for more than a year. But I’m sensitive, maybe a little breakable. A lifelong insomniac. Could I handle seeing this stuff? Would I ever sleep again?

It was a decision I didn’t have to make. Malgwi intervened. “Don’t send it to her,” she told Wabe. “It will traumatize her.”

So much of this story, I realized, came down to this minute-long exchange. I didn’t want to see the videos because I was afraid of how they might affect me. Malgwi made sure I didn’t have to. She already knew what was on the other side of the screen.

The post Silicon Savanna: The workers taking on Africa’s digital sweatshops appeared first on Coda Story.

]]>
How AI is supercharging political disinformation ops https://www.codastory.com/newsletters/how-ai-is-supercharging-political-disinformation-ops/ Thu, 05 Oct 2023 13:17:18 +0000 https://www.codastory.com/?p=46912 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Russia hands a hefty prison sentence to a YouTuber and critics pan the new Elon Musk biography

The post How AI is supercharging political disinformation ops appeared first on Coda Story.

]]>
Were Slovakia’s elections rigged? Or was that just the artificial intelligence talking? Two days before Slovakians went to the polls last week, an explosive post made the rounds on Facebook. It was an audio recording of Progressive Slovakia party leader Michal Simecka telling a well-known journalist about his plan to buy votes from the country’s marginalized Roma minority. Or at least, that is what it sounded like. There was sufficient reason to believe that Simecka might have been desperate enough to do whatever it took to win the election — his party had been polling neck and neck against that of former Prime Minister Robert Fico, who resigned from the job back in 2018 amid anti-corruption protests following the murders of journalist Jan Kuciak and his fiancee Martina Kusnirova.

Simecka and the journalist who featured in the audio clip both quickly called it a fake, and fact-checking groups backed up their claims, noting that the digital file showed signs of having been manipulated using AI. But they were in a tough spot — the recording emerged during the 48-hour pre-polling period in which the media and politicians are restricted by law from speaking about elections at all. In the end, Progressive Slovakia lost to Fico’s Smer-SD party, and the political winds have quickly shifted. Fico ran on a populist platform, pledging that his government would “not give a single bullet” to Ukraine. Already heeding Fico’s word, the sitting president opposed a new military aid package for Ukraine just yesterday. And now Fico is expected to forge an alliance with Hungary’s Viktor Orban, the only EU head of state who has sided with Russia since the war began.

The possibility that a piece of evidence was fabricated using AI throws a new digital wrench into the already chaotic and oversaturated media landscape that all voters face in any election cycle. Slovakia isn’t the first country to run into this problem, and it definitely won’t be the last. Similar circumstances are expected in the run-up to Poland’s parliamentary elections later this month, where the war in Ukraine will very much be on the ballot, and where a victory for the right-wing Law and Justice party could add to Orban’s growing camp.

While the debunked audio clip in Slovakia was dutifully garnished with a fact-check label indicating that it may have been fabricated, it’s still making the rounds on Facebook. 

In fact, Meta (owner of Facebook, Instagram and Threads) and Google (owner of YouTube) have both indicated in recent months their plans to roll back some of the disinformation-busting efforts that they trotted out following the 2016 election in the U.S. But it is X, formerly known as Twitter, that is leading in the race to the bottom — every week, we see more signs that it has little interest in enforcing its rules on disinformation. 

Even the EU itself has brought this up: Last week, European Commission Vice President Vera Jourova called X out on the issue. “Russian propaganda and disinformation is still very present on online platforms. This is not business as usual; the Kremlin fights with bombs in Ukraine, but with words everywhere else, including in the EU,” Jourova said.

Although I was never all that convinced by their fact-checking efforts, it doesn’t help that the tech giants seem to have thrown up their hands on the issue. It leaves me almost nostalgic for a time when all we had to deal with was straight-up false or racist messages flooding the zone. Turns out, things could and did get worse. 2024, here we come.

GLOBAL NEWS

A Russian blogger was sentenced to eight and a half years in prison after being convicted of reporting “fake” news about Russian military actions in Ukraine. This type of journalism became a crime in Russia shortly after Russian forces launched the full-scale invasion of Ukraine in February 2022. Aleksandr Nozdrinov was arrested not long after the war began, and was finally dealt a sentence this week. Nozdrinov maintained a YouTube channel where he regularly posted video evidence of police corruption and malfeasance for an audience of more than 34,000 subscribers. According to the Committee to Protect Journalists, Nozdrinov denies having posted the material cited by prosecutors. He believes that the case against him was fabricated by authorities intent on targeting him in retaliation for his anti-corruption activities on YouTube.

Monday marked the fifth anniversary of the murder of Washington Post columnist Jamal Khashoggi, a Saudi exile and frequent critic of the Saudi Arabian regime. There is little doubt that Khashoggi’s gruesome killing inside the Saudi consulate in Istanbul came at the behest of Crown Prince Mohammed bin Salman. It came out later on that Khashoggi and some of his closest family members and colleagues were targeted with Pegasus, the notoriously invasive mobile phone spyware built by the Israeli firm NSO Group and used to spy on journalists in more than 50 countries, from Mexico to Morocco to India. The digital dimensions of Saudi Arabia’s tactics of repression don’t stop here, and they certainly are not news. But they do bear repeating.

Researchers in Australia think anti-Indigenous narratives on social media could swing an upcoming referendum. Tomorrow, Australians will vote on whether or not the country should establish a body that would advise the government on policy decisions affecting Aboriginal and Torres Strait Islander communities. A year ago, public opinion polls indicated that most Aussies — including Prime Minister Anthony Albanese — were in favor of the measure. But that has changed in recent months, and social science researchers say viral, racialized anti-Indigenous messaging campaigns on X and TikTok might have something to do with it. The Conversation is running a series on the issue — they’re worth a read.

WHAT I’M NOT READING: THE NEW MUSK BIOGRAPHY

Instead of reading Walter Isaacson’s new biography of Elon Musk, I have been lapping up the reviews and emoji-hearting other people’s dedication to pointing out everything that somehow failed to make the cut in this 670-page “insight-free doorstop of a book” (Gary Shteyngart’s words, not mine).

In the tome’s final pages, Isaacson writes: “Sometimes great innovators are risk-seeking man-children who resist potty training.” Um, what? As Jill Lepore wrote in The New Yorker: “This is a disconcerting thing to read on page 615 of a biography of a fifty-two-year-old man about whom a case could be made that he wields more power than any other person on the planet who isn’t in charge of a nuclear arsenal.” Since Isaacson didn’t, Lepore took it upon herself to school readers on some of the harsh political realities of apartheid-era South Africa where Musk grew up, noting that his maternal grandfather apparently moved the family from Canada to South Africa because of apartheid. She touches on grandpa’s openly antisemitic views, which Isaacson somehow writes off as “quirky.”

The book also has some pretty serious whoopsies when it comes to details about Musk’s financial moves. In Financial Times columnist Bryce Elder’s acid assessment: “When it comes to money, Isaacson is more a transcriber than a biographer.” Eesh.

Writing for The Atlantic, Sarah Frier had what feels to me like the truest line: “We don’t need to understand how he thinks and feels as much as we need to understand how he managed to amass so much power, and the broad societal impact of his choices — in short, how thoroughly this mercurial leader of six companies has become an architect of our future.” 

The post How AI is supercharging political disinformation ops appeared first on Coda Story.

]]>
Why politicians are such couch potatoes when it comes to corruption https://www.codastory.com/newsletters/why-politicians-are-such-couch-potatoes-when-it-comes-to-corruption/ Wed, 04 Oct 2023 14:49:51 +0000 https://www.codastory.com/?p=46895 Oligarchy is a weekly newsletter written by Oliver Bullough, tracking how the super rich are changing the world for the rest of us.

The post Why politicians are such couch potatoes when it comes to corruption appeared first on Coda Story.

]]>
HELLO AND GOODBYE    

This is going to be my last newsletter for a while because I need to focus on writing my next book (about the fight against money laundering), so I’d like to start by thanking you for reading my weary and cynical thoughts every week, and to apologize for the fact I’m not going to keep sending them out.

Looking back at the last couple of years, I see that one of the key themes that I’ve been banging on about is the question of why Western governments fail to do anything (much) about corruption, despite the clear and obvious evidence that it makes all bad things worse. Is it incompetence — or corruption? Is the problem just too hard for honest people to solve? Are politicians themselves on the take, and thus personally invested in perpetuating the situation?

Or is it both of the above, plus something else entirely? 

I had a meeting recently with a think tank employee who was tasked with coming up with some policy ideas for a senior British politician to announce at a party conference. As you are no doubt aware, Britain has a bit of a dirty money problem, so I was delighted to sit down with them. For anyone who’s read this newsletter before, you’ll have noticed that I regularly talk about the need to adequately resource law enforcement, so that’s what I led with. I described how ordinary police forces can’t investigate fraud because they lack trained officers, and how the national-level agencies fail to prosecute kleptocracy for the same reasons. If the politician wanted Britain to stop being “butler to the world,” what they really needed to do was announce a vast increase in funding and pledge to maintain funding levels for the foreseeable future.

“That’s not going to get them any headlines though, is it?” the think tanker replied. “We need something new.”

I did try to suggest some legal changes, but my heart wasn’t really in it because I’d suddenly spotted what the problem was, and it seemed to have resonance far beyond the U.K.

Our governments are like couch potatoes who are determined to get fit. They are unhealthy, they know it, and they know what the solution is: exercise. In furtherance of that strategy, they buy a treadmill. This gets them a good headline, and they like it. So they buy more fitness equipment: a stationary bicycle, a StairMaster, a rowing machine, a pair of running shoes that will improve performance, athleisure wear that wicks away sweat, some of those leg warmers that Jane Fonda wore in her workout videos, and so on. Every time they buy something, they say that it’s proof of their commitment to get fit, and headline writers praise them for it.

But at some point, they’ve got enough fitness equipment. That’s when they need to start exercising, but that’s also when the whole calculation changes. Because exercise is difficult and it’s not going to win any positive coverage. In fact, it could well do the opposite: If enforcement agencies bring the kind of long and complex prosecutions required to combat financial crime, they’re likely to make mistakes, and then the politicians will get criticized, and that’s no fun at all. It’s far safer to announce a new legislative initiative, and leave the sweating to someone else at some point in the ever-receding future.

Is there a word for this? Short-termism isn’t right, but I can’t think of another term for a feedback loop that actively militates against long-term action being taken. I am, however, an optimist (even when pessimists win, they lose, as someone probably once said) and intend to remain one. Financial crime is a tax on our societies, enriching criminals and immiserating everyone else. Corruption is a force multiplier for kleptocrats. Tax evasion is weakening our public services. It is so obvious that tackling these linked curses should be a priority that, at some point, even politicians will realize it.

CRYPTO

Last week, I interviewed journalist Zeke Faux about his new book, “Number Go Up,” which is a very good account of the mirror dimension that is the crypto-verse. He was every bit as amusing as his book, and I recommend it to you. One particularly entertaining point that he made was how when he first pitched the idea of the book to publishers, cryptocurrencies had not yet suffered the so-called crypto winter. As a result, he was relying on the collapse happening while he researched the book. Spoiler: It did.

  • “Faux demonstrates his incisive grasp of the story with the very first words of his prologue: ‘“I’m not going to lie,” Sam Bankman-Fried told me,’ he writes. ‘That was a lie,’” as this entertaining Los Angeles Times review puts it.

It’s always nerve-wracking researching a book about current affairs because of the concern that whatever phenomenon you’re describing will be solved by the time you’ve finished writing it. When I was researching “Moneyland,” I was convinced the problems I’d identified were so pressing that politicians would resolve them long before I made it to print. Funnily enough, Nick Shaxson has told me that he felt the same thing when he wrote his own book about offshore finance — “Treasure Islands” — which was published seven years earlier.

So when I say I’m an optimist, do I mean that I think money laundering will be solved by the time my book is finished and the world will be better? Or do I mean that it won’t and people will therefore want to read my book? Good question. Thanks.

RISKS

What might get in the way of the problem of money laundering being solved? A long answer to that question would take up an entire book (perhaps I should write it), but the short answer is just two words: Donald Trump. That’s not to say progress in tackling the mechanisms of corruption is impossible if Trump is reelected. After all, the Corporate Transparency Act was passed by Congress in January 2021 — although, admittedly, with a veto-proof majority — when he was still president, opening the way for U.S. shell companies to become less opaque.

The significance of his reelection for the global fight against kleptocracy is different: Tackling financial crime will be a complex, laborious, lengthy effort, with multiple countries having to be charmed, cajoled and bullied into taking part. The only country capable of leading that effort is the U.S., and Trump is utterly incapable as both a politician and a human being of making that happen.

The European Union is currently poised halfway between making corporate ownership data public or leaving it private. Any U.S. backtracking would embolden European enemies of the plan, thus fatally weakening attempts to create a global standard.

  • “Thanks to the decades of secrecy that such opaque entities have provided, unscrupulous individuals from across the world were able to find safe haven in the EU – circumventing sanctions, evading accountability and committing further crimes with impunity,” said this open letter to the European Commission from earlier this year.

Fighting financial crime should not be a party-political point, in that all democratic states should be dedicated to keeping their economies and societies free of dirty money. However, there is a world of difference between Trump’s incoherent mess of an approach, and that of Joe Biden’s White House, with its careful anti-corruption strategy.

This thoughtful article from Charles G. Davidson and Ben Judah makes clear how corruption is also a threat to democracy, which depends not just on the system being fair, but also on everyone believing that the system is fair.

  • “Financial secrecy has swollen in recent years as elites have abandoned their duty to pay their fair share. A metastasizing culture of tax avoidance by corporations and the wealthy has weakened national values, institutions, and goals across the West while fueling levels of inequality that wreck national cohesion, drive spiraling resentment, and stoke anger. This is empowering the enemies of democracy at home and abroad,” Davidson and Judah conclude.

Transparency is necessary but not sufficient, and passing laws is not enough, as is evident here in the U.K. An immediate response to the Russian assault on Kyiv last year was the passage of a law making public the owners of shell companies that hold U.K. property, with the aim of ending a loophole long enjoyed by the oligarchs that have invested in London mansions. More than half of such properties in the London borough of Kensington and Chelsea still do not reveal their owners, and there is no sign of enforcement action being taken against them.

  • “There is no point building a dam halfway across a river. These gaps are threatening the efficacy of the entire Register,” said Andy Summers, associate professor at the London School of Economics Law School.

Nowhere, of course, is this of greater significance than in Ukraine, where long-term victory over Russia is a tall order in the best of circumstances. Without ending corruption, it will be all but impossible, not least because corruption allegations would make Western aid harder to justify.

REASONS TO BE OPTIMISTIC

I’ve just been in Texas for a few weeks, reading documents relating to the creation of the Bank Secrecy Act, which was passed in 1970 as the world’s first anti-money laundering legislation. At the time of its passage, Richard Nixon — hardly a paragon of cleanliness in public office — was president. After its passage, banks fought against its implementation all the way to the U.S. Supreme Court. Police agencies lacked enthusiasm for it and took years to actually get around to using it. And yet it survived and went on to become the cornerstone of federal and global attempts to clean up the financial system.

The lesson I took from my days with the boxes upon boxes of papers — among them memos between participants, scribbled notes from members of Congress, transcripts of committee hearings, letters from grateful constituents — is that careful, thorough, well-intentioned efforts change the world. They may not earn headlines like the purchase of a new piece of fitness equipment does, but all they require is for a sufficient number of people to be prepared to put the hours in, and they will come to pass.

And that reminds me of what Daria Kaleniuk, the tireless Ukrainian activist, said years ago when I asked her how she kept going in her battle to end corruption despite ceaseless official obstruction (and worse). The aim isn’t to make the world perfect — just to make it better.

  • “I don’t think about ending corruption completely. We are currently at 4% of where I want us to be, and my ambition is to get to 5,” Kaleniuk said.

WHAT I’VE BEEN READING

Speaking of tireless activists, I really enjoyed Naomi Klein’s “Doppelganger,” which is as passionate and thoughtful as you’d expect one of her books to be. It starts from the rather slight observation that she kept being confused with Naomi Wolf, before exploring the weird synthesis between the far right and the New Age left that has taken place since the pandemic. 

Apart from that, I was late to Lea Ypi’s “Free,” but I highly recommend it as a funny, fresh and thoughtful take on politics, growing up and Marxism.

I hope to revive this newsletter when my book is done, but until then, thanks for reading.

The post Why politicians are such couch potatoes when it comes to corruption appeared first on Coda Story.

]]>
The dangerous myths sold by the conspiritualists https://www.codastory.com/waronscience/the-dangerous-myths-sold-by-the-conspiritualists/ Tue, 03 Oct 2023 09:25:38 +0000 https://www.codastory.com/?p=46872 Wellness influencers are repackaging old conspiracy theories and misinformation to peddle products to vulnerable people

The post The dangerous myths sold by the conspiritualists appeared first on Coda Story.

]]>
Patches of pale skin on chiropractor Melissa Sell’s back and shoulders have been turned neon pink by the sun. “This is not a burn,” she tells her nearly 50,000 Instagram followers, “this is light nutrition.” 

The “unhelpful invocation” of the term “sunburn,” she argues, makes “an unconscious mind feel vulnerable and fearful of the sun.” She welcomes this color, insinuating that you should too.

Decades of research have shown that sunburns are strong predictors of melanoma, the most serious type of skin cancer. Roughly 8,000 Americans are expected to die from it this year, according to the American Cancer Society. Skin cancer is the most common form of cancer in the United States, and melanoma rates doubled between 1982 and 2011.

Still, Sell is not alone in the anti-sunscreen camp. Even Stanford University neuroscientist Andrew Huberman, host of the wildly successful podcast “Huberman Lab,” claims that some sunscreens have molecules that can be found in neurons 10 years after application. No evidence is offered. Elsewhere, he has said he’s “as scared of sunscreen as I am of melanoma.” Huberman’s podcasts are frequently ranked among the most popular in the U.S.; he has millions of followers on YouTube and Instagram and has been the subject of admiring magazine profiles.

Spreading misinformation and even conspiracy theories has become commonplace in wellness spaces across social media. In a politically charged atmosphere addicted to trading in binaries, good science is too often sacrificed at the altar of partisan opinion.

Pushing back against medical advancements from as far back as the 19th century has become a rallying cry for a growing number of today’s conspiritualist contrarians. Fear mongering about vaccinations is not the only entry point to this strange world of conspiracy and misinformation, in which predominantly white, middle- or upper-middle-class wellness influencers propagate and sell ideas and products with little to no oversight. In this world, humans are godlike creatures immune to viruses and cancers, while those who fall victim to illness and therefore the twisted machinations of society are but collateral damage.

In May 2020, I launched the “Conspirituality” podcast with Matthew Remski and Julian Walker. Veteran yoga instructors deeply embedded in the wellness industry, we’ve long been skeptical about many health claims proffered by wellness influencers and the cult-like behaviors that appear in so-called spiritual communities. And we’ve always been attuned to the monetization of health misinformation. 

Conspirituality is a portmanteau of “conspiracy” and “spirituality,” coined in 2011 by Charlotte Ward and David Voas in an academic paper. They observed a strange synthesis between “the female-dominated New Age (with its positive focus on self) and the male-dominated realm of conspiracy theory (with its negative focus on global politics).” The pandemic provided fertile ground for conspirituality, moving it from the fringe to the mainstream.

Specifically, we launched the podcast after the release of the 2020 pseudo-documentary “Plandemic.” Filmmaker Mikki Willis, who had moderate success in the Los Angeles wellness and yoga scene a decade or so ago, found a much larger audience with right-leaning conspiracy theorists — so much so that he was joined by Alex Jones at the red carpet premiere in June this year of the third installment of the “Plandemic” series. Many other former liberals in the wellness space have taken a hard right turn, including comedian and aspiring yogi Russell Brand. Brand now regularly hosts conspiracy theorists as part of what appears to be a gambit to deflect attention from the numerous sexual abuse allegations against him made public earlier this month.

Not all conspiritualists are hard right, though their rhetoric predominantly leans that way. One of America’s most infamous anti-vaxxers, Robert F. Kennedy Jr., for instance, is attempting to challenge President Joe Biden from the left in the Democratic Party presidential primaries. Predictably, Kennedy’s health policy roundtable, held on June 27, featured other leading health misinformation spreaders.

While the anti-vaccination movement began the moment Edward Jenner codified vaccine science, the modern upswell of anti-vax fervor dates back to disbarred physician Andrew Wakefield’s falsified research that purported to link vaccinations to autism in 1998. Hysteria around COVID-19 vaccines began months before a single one hit the market, in large part thanks to misinformation spread by “Plandemic.” And that trend shows no sign of slowing.

Health misinformation is likely as old as consciousness. The learning curve in understanding which plants heal and which kill took millennia without the benefit of controlled environments. While no science is perfect, to deny or disavow the progress we’ve made is absurd. The 19th century was an especially fruitful time, with vaccination, germ theory, antisepsis and handwashing greatly advancing our biological knowledge.

Germ theory is a foundational tenet of modern science. For centuries, miasma theory was the favored explanation for the Black Plague, cholera and even chlamydia. These diseases were supposedly the result of “bad air,” which the ancient Greek physician Hippocrates claimed originated from rotting organic material and standing water. 

The English physician John Snow, famous for tracing the source of an 1854 cholera outbreak in London to a water pump in the city.

In 1857, English physician John Snow submitted a paper tracing a cholera outbreak to contaminated water from a pump in London’s Broad Street. Adoption of sanitary measures was slow and grudging. Civic authorities weren’t interested in the expense of rerouting pipelines.

A few years later, French chemist Louis Pasteur identified a microbial culprit behind puerperal fever, though it wasn’t until Robert Koch photographed the anthrax bacterium in 1877 that disease was undeniably linked to bacteria. Medicine was transformed.

Contemporary contrarian wellness influencers also trace their antecedents back to the 19th century. While Pasteur won fame — pasteurization remains an important practice for killing microbes — some of his colleagues resisted his findings. French scientist Antoine Béchamp devised the pleomorphic theory of disease: It’s not that bacteria or viruses cause diseases; it’s just that they’re attracted to people already susceptible to those diseases. 

As Pasteur and Koch continued their research on microorganisms, Béchamp faded into obscurity. But his “terrain theory” lingered. It was the harbinger of the infamous “law of attraction,” the belief in the power of manifestation, of effectively imagining wealth, health and success into being. It’s the school of thought that, repackaged, made books like Rhonda Byrne’s “The Secret” (2006) a global bestseller. 

Extended to physical wellbeing, it means that if your mindset is “correct,” disease has no pathway into your body. This ideology is behind the many products and courses sold by wellness influencers. In 2017, pseudoscience clearinghouse GreenMedInfo published an article in which the writer described Pasteur as the “original scammer” who enabled “the pharmaceutical industry to dominate and tyrannically rule modern Western medicine.” If you can sell the public on a pathology of disease, the writer argued, you can sell a cure.

He championed a return to nature as the real way to protect against disease: “Detoxing and seeking fresh whole foods and adding the proper supplements offer more disease protection from germs than all the vaccines in the world.”

Louis Pasteur in his laboratory. The French 19th century microbiologist was a pioneer of germ theory and vaccination. Unknown Author/Britannica Kids.

Terrain theory has no greater proponent than Zach Bush, a physician who rightfully argues that the environment plays a role in health outcomes. But then he goes on to say that since there are billions of viruses, it must really be unhealthy tissues making the victim susceptible to disease — Antoine Béchamp’s exact argument. Bush claims that viruses are nature’s way of upgrading our genes, and any ailment must be due to a bodily imbalance.

This form of magical thinking is spread across his many web pages. Instead of conducting actual research on COVID-19 as an internist, Bush offered statements like this to his million-plus followers: “May this respiratory virus that now shares space and time with us teach us of the grave mistakes we have made in disconnecting from our nature and warring against the foundation of the microbiome. If we choose to learn from, rather than fear, this virus, it can reveal the source of our chronic disease epidemics that are the real threat to our species.”

In April, Bush told an Irish podcast that if he were to take a single course of antibiotics, his chances of “major depression over the next 12 months goes up by 24 percent.” Two courses, and he claimed that he would be 45% more likely to develop anxiety disorders and 52% more likely to suffer depression. The podcast’s hosts made a public apology, though Bush continues to spread his misinformation. Inevitably, Bush sells a range of supplements “key to our overall health and wellbeing.”

Watch what they say, then watch what they sell. If an influencer tells you Western medicine has failed you, be sure a product pitch is coming. Supplements are the main vehicle of monetization for wellness influencers: they don’t have to be clinically tested and are barely regulated, existing in a medical gray zone. Consumers mostly ignore the fine print on the back label because the promises on the front are so much more appealing.

Like Bush, Jessica Peatross sells supplements and protocols to her more than 300,000 Instagram followers while consistently invoking Béchamp. “Terrain theory matters,” Peatross wrote in a March 2023 post. “When your body’s symphony isn’t in tune, or you are out of homeostasis, you are much more vulnerable to pathogenic invasion, cancer or autoimmunity.”

Last year, Peatross surrendered her medical license in California due to vaccine requirements. Now she sells subscription health plans. When signing up for her email list, you get a link to download her “Vaccine Protection & Detox Protocol.” 

All proponents of terrain theory put the onus of disease on the individual. They demand we each fend off the toxic effects of Big Pharma, Big Ag and all the other Bigs in existence through supplementation, meditation, breathwork, psychedelic rituals in Bali, or simply by thinking positively, thinking the “right way,” a learned skill for which they always have a course. 

Among the more notorious pushers of terrain theory doctrine was German physician Ryke Geerd Hamer, the inventor of Germanic New Medicine. In 1995, already discredited and stopped from practicing medicine in Germany, he diagnosed a 6-year-old girl as having “conflicts.” As a result, her parents refused to treat the 9-pound cancerous tumor in her abdomen. An Austrian court stripped them of custody, so that she could receive the chemotherapy that saved her life. 

Hamer, who died in 2017, believed medicine was controlled by Jewish doctors who used treatments like chemotherapy on non-Jewish patients. Perhaps unsurprisingly, many pseudoscience claims and conspiracies are rooted in antisemitism. Hamer also promoted the idea of microchips in swine flu vaccines and denied the existence of AIDS.

Discredited German doctor Ryke Geerd Hamer (r) on trial in 1997 in the Cologne district court. Hamer, who died in 2017, believed chemotherapy was part of a Jewish conspiracy to destroy Western civilization. Roland Scheidemann/picture alliance via Getty Images.

Germanic New Medicine is based on the “five biological laws,” which claim that all severe disease is due to a shock event. If the victim doesn’t immediately solve their conflict, the disease progresses in the brain. Microbes actually enter the body to heal it, provided the victim addresses the psychological conflict that led to the proliferation of the disease state. The victim heals after confronting the conflict, which Hamer thought nature had intentionally placed there to teach some sort of lesson. Death only occurs when you don’t face the trauma of the shock event. So that’s on you.

Disease exists to teach a lesson. A sunburn is light nutrition. It’s no wonder that Melissa Sell is one of the most vocal revivalists of Hamer’s theories, which she has renamed “Germanic Healing Knowledge.” She uses social media to share thoughts like: “You are not ‘sick’. Your body is adapting to help you through a difficult situation. When you resolve that situation your body will go through a period of restoration and then return to homeostasis.” 

Sadly, this is par for the course. With my podcast colleagues, Matthew and Julian, our review of conspiritualists found that the notion of an “ideal” body or way of being is widespread. As we document in our book, modern yoga was in part influenced by the famed 19th- and early 20th-century German strongman Eugen Sandow, whose adopted first name is a truncation of “eugenics.” 

Yoga originated in India, yet Sandow’s techniques found an audience among Indians in the late 19th century. Feeling emasculated and humiliated by British colonialists, many Indians appreciated Sandow’s overt masculinity and mimicked his strength techniques in a set of yoga postures that are now widely used. Indians craved bodily strength as a metaphor for overcoming colonial rule. Sandow came at it from the other side. He used his physique to further an explicitly racist world view. There was a reason why the strong white race dominated the world, he seemed to be saying — just watch me flex my biceps.

Wellness influencers similarly obsess over a strong and purified body. They assign similar causes to all ailments, which usually include poor diet, a lack of exercise, modern medicine and an inability to escape toxic stress. Sometimes, however, the influencer assigns physical attributes to the perfected body, which is why anti-trans bigotry and fat-shaming run rampant in wellness spaces. The ideal body, which can only be accomplished by resisting the evil mechanisms of allopathic (Western) medicine, is the true goal of nature’s design. Strangely, a number of these same influencers take no issue with cosmetic surgeries, botox or steroids, yet scream at followers for using deodorant or applying sunscreen. 

So what is the “right” sort of existence that lets the victim recover and achieve homeostasis, a state of internal balance consistent with Hamer’s five biological laws? According to Sell, as she explained on X, formerly known as Twitter, “The way to feel better is to think better thoughts.” Naturally, she has a number of online courses available to help you think better thoughts, ranging in price from $111 to $2,700.

Eugen Sandow, the strong man, in weight-lifting act, circa 1895. Getty Images.

In 1810, German physician Samuel Hahnemann coined the term “allopathy” as a foil for his concept of homeopathy. Whereas homeopathy means “like cures like,” allopathy initially meant “opposite cures like.” In the allopathic system, for instance, you take an antidiarrheal to treat diarrhea; in homeopathy, you take a laxative. Well, the “essence” of a laxative.

Allopathy has come to mean anything involving Western medicine, while homeopathy is considered a natural system for healing (even though ground-up pieces of the Berlin Wall are used in one homeopathic remedy, and I don’t recall concrete ever forming without human intervention).

Hahnemann left his role as a physician in 1784 due to barbaric practices like bloodletting. He supported his family by translating medical textbooks. Inspired by Scottish physician William Cullen’s book on malaria, he slathered cinchona — a quinine-containing bark — all over his body to induce malaria-like symptoms. What he likely developed was an inflammatory reaction, but he took it for the real thing and believed himself to be inoculated against malaria. This experience became the basis of homeopathy.

Instead of ingesting (or slathering on) small quantities of an offending agent, Hahnemann removed the active ingredient altogether from his distillations. He believed that less substance equals higher potency, and kept following that trail: Most homeopathic products contain no active ingredient.

Take Oscillococcinum, one of France’s top-selling medicines, which rakes in $20 million in America every year. The process of potentization — homeopathy’s dilution protocol — begins with the heart and liver of the Muscovy duck. Technicians mix 1 part duck heart and liver with 100 parts sugar in water. Then the process is repeated 200 times, which means any trace of the duck is long gone. The late family physician Harriet Hall pointed out that you’d need a container 30 times the size of the earth just to find one duck molecule. Yet it’s marketed to reduce flu symptoms. 
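The scale of that dilution bears out in simple arithmetic. The sketch below is a back-of-the-envelope check, not a description of Boiron’s actual process: the 200 serial 1:100 steps come from the account above, while the assumption that the first flask held a full mole of duck extract is my own deliberately generous upper bound.

```python
# Back-of-the-envelope check on a 200C homeopathic dilution.
# Each "C" step dilutes the preparation 1:100, so a 200C remedy
# is diluted by a factor of 100**200 = 10**400 overall.

AVOGADRO = 602_214_076_000_000_000_000_000  # molecules per mole, as an integer

def dilution_factor(c_potency: int) -> int:
    """Overall dilution factor after c_potency serial 1:100 steps."""
    return 100 ** c_potency

factor = dilution_factor(200)  # an exact integer: 1 followed by 400 zeros

# Generous assumption (mine): the very first flask held a full mole of
# duck heart-and-liver molecules. Expected survivors after 200 steps:
survivors = AVOGADRO / factor  # ~6e-377, which underflows to 0.0

print(len(str(factor)))  # 401 digits
print(survivors)         # 0.0
```

Even granting a mole of starting material (about 6 x 10^23 molecules), the expected count of surviving molecules is roughly 6 x 10^-377, so far below one that floating-point division rounds it to exactly zero.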

When a spokesperson for Boiron, the manufacturer of Oscillococcinum, was asked if their product was safe, she replied: “Of course it is safe. There’s nothing in it.”

Despite an absence of active ingredients, homeopathic products are often mistaken for herbal remedies, according to Jonathan Jarry, a science communicator with the Office for Science and Society at McGill University. In his article, Jarry cites a Health Canada survey that shows only 5% of Canadians understand what homeopathy entails. Pharmacies and grocery stores confuse customers by shelving these products next to herbal remedies and other medicines.

When I asked Jarry about the danger of consumer confusion, he said, “Homeopathic products are based on sympathetic magic principles and are not supported by our understanding of biology, chemistry and physics. So when they’re sold alongside actual pharmaceutical drugs, it creates a false equivalence in the mind of the shopper. It bumps homeopathy up to the level of medicine and turns its products into pharmaceutical chameleons.”

Homeopathy suppliers want it both ways: They claim their products are superior to pharmaceuticals while pushing to have them shelved next to actual drugs to obscure their difference. The name of their 100-year-old trade group? The American Association of Homeopathic Pharmacists.

Jarry has helped lead the charge for proper labeling of homeopathic products in Canada. Over the border, in the U.S., the Federal Trade Commission began regulating homeopathic products in 2016, though these efforts seem to have had little impact. The global homeopathic market is expected to reach nearly $20 billion by 2030.

Jarry thinks regulatory agencies must work harder to make clear that homeopathy is not based on science. But everyone passes the buck. “The pharmacists who own drug stores in which homeopathy is sold,” Jarry told me, “say that it’s up to the chain they work for to tell them to stop selling these products.” Meanwhile, “the chains say the products are approved by Health Canada, whose representatives say it’s up to pharmacists to use clinical judgment when recommending them or not.”

While the risk of injury is low given that most homeopathic products contain no active ingredient, there’s another danger lurking beneath the surface — people choosing to use these products instead of seeking out interventions that can actually help them. 

Avoidance of “allopathic” medicine is common in wellness spaces, the belief being that natural cures are better than anything concocted in a laboratory. The stakes are particularly high when it comes to mental health.

We’ve included a chapter called “Conspiritualists Are Not Wrong” in our book to acknowledge the fact that many people turn to natural remedies and wellness practices with good intentions. The American for-profit healthcare system can be a nightmare. We likely all have anecdotes of when the system failed us. Just as we all have likely benefited from Western medicine. It often depends on where your attention is most drawn. 

Like many wellness professionals, I lost a lot of income when the pandemic struck. All of the group fitness and yoga classes that I ran were gone overnight. My wife, who worked in hospitality at the time, lost her job. We were fortunate to have enough savings to get by, along with whatever income I pulled together as a freelance writer and by livestreaming donation yoga classes on YouTube. Our story isn’t unique, and it makes sense that wellness professionals turned to whatever revenue they could find. 

I wasn’t surprised to see so many supplements and online courses being marketed in the first months of the pandemic. But the sheer number of mental health interventions sold by wellness influencers was astounding — and concerning. Everyone seemed to have a hot take on mental health, and many leaned on the appeal to nature fallacy: You can heal depression with a supplement or a meditation practice or by cultivating the right mindset. 

“Holistic psychiatrist” Kelly Brogan, who is clinically trained but took a right turn even before the pandemic began, offers paying clients protocols for tapering off antidepressants — even though no validated protocols exist. True, pharmaceutical companies that know how to get patients onto their medications have never bothered to figure out how to get them off. But beware the influencer who writes, as Brogan does, “Tapering off psychiatric medication is a soul calling. It is a choice that you feel magnetized toward and will stop at nothing to pursue.”

Jonathan N. Stea is a clinical psychologist and adjunct assistant professor at the University of Calgary. A prolific science communicator, he doesn’t mince words when I ask him about wellness influencers who claim that natural remedies are better than antidepressants. 

“I’m tired of wellness influencers unethically opining on topics they’re unqualified to understand,” he said. “Notwithstanding the appeal to nature fallacy with respect to the idea that there are ‘better natural remedies’ than evidence-based psychiatric medications, it’s irresponsible to make such claims in the absence of scientific evidence.”

The paradox of the wellness industry is that you supposedly thrive when you connect with nature, yet you also need endless products and services. Self-professed metaphysics teacher Luke Storey, for example, sells over 200 products that offer the “most cutting-edge natural healing” that jibe with his love for “consciousness expanding technologies.” How much healing does one really need? How contracted is consciousness that it requires so much expansion?

It’s one thing to enjoy spiritual tchotchkes, but telling people these accouterments are necessary for salvation is disingenuous.

The problem is that people don’t necessarily feel better with these protocols or products. The way the wellness grift is framed — the notion that your thoughts dictate your reality — results in the adherent feeling worse if the therapeutic doesn’t work. They believe it’s a moral failing because charismatic influencers place the burden on them: “You didn’t do x or y hard enough.” So back on the treadmill they go.

Tragically, Stea said some people suspend antidepressant usage to chase magical-sounding cures. “Abrupt cessation of these medications can result in awful withdrawal symptoms,” he told me. “The other risk is that forgoing medications for unsupported or pseudoscientific treatments carry their own potential for harm, either directly due to the treatment, or indirectly by possibly worsening an untreated mental disorder.”


Roger Ressmeyer/Corbis/VCG via Getty Images.

People in pain are vulnerable. Unfortunately, there’s no silver bullet for depression, anxiety or suicidal ideation. At least accountability exists in regulated spaces. Pseudoscientific sermons on TikTok have no such oversight.

Ideally, science tests claims with the best available means at the time. If better tools emerge, findings are updated. Conspiritualists are regressing in this regard. Their romanticization of 19th-century pseudoscience is a ruse that helps them sell products and services. 

In many ways, we’re victims of our own success. The advancements of the 19th century in public health, hygiene and drugs are part of the reason most of us are here today. Like the proverbial fish that doesn’t know it’s swimming in water, we’re all afloat in the hard-won progress of centuries of trial and error. 

We’re also not the same animals that gave birth to our line 100,000 years ago or even 1,000 years ago. For better and worse, we’ve drastically changed our relationship to our environment, just as we have drastically changed the environment. Glamorizing who we were neglects what we’ve become and how we got here. 

Michelle Wong, a science educator and cosmetic chemist based in Australia, told me that when the likes of Melissa Sell make their anti-sunscreen pitches, they rely on the appeal to nature fallacy. “There’s the idea that humans evolved with sun exposure,” she said, “so our skin should be able to handle it. But skin cancers usually develop after reproductive age (which is all that evolution helps us with). On top of that, migration and leisure, like beach holidays, means we get very different sun exposure compared to how we evolved.” As the 16th-century Swiss physician Paracelsus once observed, what heals in small doses kills in large.

The sun, in other words, isn’t to be feared, but we would do well to respect its power. And to not overestimate our own.

The post The dangerous myths sold by the conspiritualists appeared first on Coda Story.

]]>
Meta cozies up to Vietnam, censorship demands and all https://www.codastory.com/authoritarian-tech/vietnam-censorship-facebook/ Thu, 28 Sep 2023 15:25:58 +0000 https://www.codastory.com/?p=46764 U.S. social media companies have become indispensable partners in Vietnam's information control regime

The post Meta cozies up to Vietnam, censorship demands and all appeared first on Coda Story.

]]>
When Vietnamese Prime Minister Pham Minh Chinh and his delegation visited Meta’s Menlo Park headquarters in California last week, they were welcomed with a board reminiscent of Facebook’s desktop interface.

“What’s on your mind?” it read at the top. Beneath the standard status update prompt were a series of messages written in Vietnamese that extended a warm welcome to the prime minister, underscoring the collaboration between his government and the social media giant. Sunny statements are reported to have dominated the meeting in which the two sides rhapsodized about bolstering their partnership.

Prime Minister Chinh highlighted the instrumental role American companies, Meta in particular, might play in unlocking the potential of the Comprehensive Strategic Partnership that the U.S. and Vietnam cemented in mid-September. He encouraged Meta to deepen its ties with Vietnamese firms to boost the digital economy. Joel Kaplan, Meta’s vice president for U.S. public policy, indicated willingness to support Vietnamese businesses of all sizes, adding that the company hopes to continue producing “metaverse equipment” in the country.

The warm aura of the meeting obscured an uncomfortable reality for Meta on the other side of the Pacific: It has become increasingly enmeshed in the Vietnamese government’s draconian online censorship regime. In a country whose leaders once frowned upon it, Facebook has seen its relationship with the Vietnamese government morph from one of animosity to an unlikely alliance of convenience. No small feat for the social media giant.

Facebook has long been the most popular social media platform in Vietnam. Today, over 70% of Vietnam’s total population of nearly 100 million people use it for content sharing, business operations and messaging.

For years, Facebook’s approach to content policy in Vietnam appeared to be one of caution, in which the company brought some adherence to free speech principles to decision-making when it was faced with censorship demands from the government. But in 2020, it shifted to one of near-guaranteed compliance with official demands, at least in the eyes of Vietnamese authorities. It was in that year that the Vietnamese government claimed that the company went from approving 70% to 75% of censorship requests from the authorities to a staggering 95%. Since then, Vietnamese officials have maintained that Facebook’s compliance rate is upwards of 90%.

Meta’s deference to Vietnam’s official line continues today. Last June, an article in the Washington Post quoted two former employees who, speaking on the condition of anonymity, said that Facebook maintained an internal list of Vietnamese Communist Party officials whom it agreed to shield from criticism on its platform. The undisclosed list is included in the company’s internal guidelines for moderating online content, with Vietnamese authorities holding significant sway over it, the Post reported. While the Post did not cite the names of the Vietnamese officials on the list, it noted that Vietnam is the only country in East Asia for which Facebook provides this type of white-glove treatment.

Also in June, the government instructed cross-border social platforms to employ artificial intelligence models capable of automatically detecting and removing “toxic” content. A month earlier, in the name of curbing online scams, the authorities said they were gearing up to enforce a requirement that all social media users, whether on local or foreign platforms, verify their identities.

These back-to-back developments are emblematic of the Vietnamese government’s growing confidence in asserting its authority over Big Tech.

Facebook’s corporate headquarters in Menlo Park, California. Josh Edelson/AFP via Getty Images.

How has Vietnam reached this critical juncture? Two key factors seem to account for why Vietnamese authorities are able to boss around Big Tech.

The first is Vietnam’s economic lure. Vietnam’s internet economy is one of the most rapidly expanding markets in Southeast Asia. According to a report by Google and Singapore’s Temasek Holdings, Vietnam’s digital economy hit $23 billion in 2022 and is projected to reach approximately $50 billion by 2025, with growth fueled primarily by a thriving e-commerce sector. 

Dangling access to a market of nearly 100 million people, Vietnamese authorities have become increasingly adept at exploiting their economic leverage to browbeat Big Tech companies into compliance. Facebook’s 70 million users aside, DataReportal estimates that YouTube has 63 million users and TikTok around 50 million in Vietnam.

Although free speech principles were foundational for major American social media platforms, it may be naive to expect them to adhere to any express ideological value proposition at this stage. Above all else, they prioritize rapid growth, outpacing competitors and solidifying their foothold in online communication and commerce. At the end of the day, it is the companies’ bottom line that has dictated how Big Tech operates across borders.

Alongside market pressures, Vietnam has also gained leverage through its own legal framework. Big Tech companies have recognized that they need to adhere to local laws in the countries where they operate, and the Vietnamese government has capitalized on this, amping up its legal arsenal to tighten its grip on cyberspace, knowing full well that Facebook, along with YouTube and TikTok, will comply. Nowhere is this tactic more manifest than in the crackdown on what the authorities label as anti-state content. 

Over the past two decades, the crackdown on anti-state content has shaped the way Vietnamese authorities deploy various online censorship strategies, while also dictating how a raft of laws and regulations on internet controls were formulated and enforced. From Hanoi’s perspective, anti-state content can undermine national prestige, besmirch the reputation of the ruling Communist Party and slander and defame Vietnamese leaders.

There is one other major benefit that the government derives from the big platforms: it uses them to promote its own image. Like China, Vietnam has since 2017 deployed a 10,000-strong military cyber unit tasked with manipulating online discourse to enforce the Communist Party’s line. The modus operandi of Vietnam’s cyber troops has been to ensure “a healthy cyberspace” and protect the regime from “wrong,” “distorting,” or “false news,” all of which are in essence “anti-state” content in the view of the authorities.

And the biggest companies now readily comply. A majority of the online posts that YouTube and Facebook have restricted or removed at the behest of Vietnamese authorities were related to “government criticism” or were posts that “oppose the Communist Party and the Government of Vietnam,” according to transparency reports from Google and Facebook.

The latest data disclosed by Vietnam’s Ministry of Information and Communications indicates that censorship compliance rates by Facebook and YouTube both exceed 90%.

In this context, Southeast Asia provides a compelling case study. Notably, four of the 10 countries with the highest number of Facebook users worldwide are also in Southeast Asia: Indonesia, the Philippines, Vietnam and Thailand. Across the region, censorship requests have pervaded the social media landscape and redefined Big Tech-government relations. 

“Several governments in the region have onerous regulation that compels digital platforms to adhere to strict rules over what content is or isn’t allowed to be on the platform,” Kian Vesteinsson, an expert on technology and democracy at Freedom House, told me. “Companies that don’t comply with these rules may risk fines, criminal or civil liability, or even outright bans or blocks,” Vesteinsson said.

But a wholesale ban on any of the biggest social platforms feels highly improbable today. These companies have become indispensable partners in Vietnam’s online censorship regime, to the point that the threat of shutting them down is more of a brinkmanship tactic than a realistic option. In other words, they are too important to Vietnam to be shut down. And the entanglement goes both ways — for Facebook and Google, the Vietnamese market is too lucrative for them to back out or resist censorship demands.

To wit: Vietnam threatened to block Facebook in 2020 over anti-government posts, but the threat never materialized. And Facebook has largely met the demands of Vietnamese authorities ever since.

Last May, TikTok faced a similar threat. Vietnam launched a probe into the platform’s local operations, warning that any failure to comply with Vietnamese regulations could see TikTok shown the door in this lucrative market. While the outcome of the inspection is pending and could be released at any time, there are already signs that TikTok, the only foreign social media platform to have set up shop in Vietnam, will do whatever it takes to get on the good side of Vietnamese authorities. In June, TikTok admitted to wrongdoing in Vietnam and pledged to take corrective actions.

The fuss that Vietnamese authorities have made about both Facebook and TikTok has likely masked their real intent: to further strong-arm these platforms into becoming more compliant and answerable to Vietnamese censors. Judging by their playbook, Vietnamese authorities are likely to continue wielding the stick of shutdown as a pretext to tighten the grip on narratives online, fortify state controls on social media and solidify the government’s increasing leverage over Big Tech.

Could a different kind of platform emerge in this milieu? Vietnam’s economy of scale would scarcely allow for this kind of development: The prospect of building a more robust domestic internet ecosystem that could elbow out Facebook or YouTube doesn’t really exist. Absent bigger political and economic changes, Hanoi will remain reliant on foreign tech platforms to curb dissent, gauge public sentiment, discover corrupt behavior by local officials and get out its own messages to its internet-savvy population.

The post Meta cozies up to Vietnam, censorship demands and all appeared first on Coda Story.

]]>
Why are AI software makers lobbying for kids’ online safety laws? https://www.codastory.com/newsletters/why-are-ai-software-makers-lobbying-for-kids-online-safety-laws/ Thu, 28 Sep 2023 14:44:27 +0000 https://www.codastory.com/?p=46737

The post Why are AI software makers lobbying for kids’ online safety laws? appeared first on Coda Story.

]]>
THINK OF THE CHILDREN

Last week, the U.K. passed the Online Safety Bill, a law that’s meant to help snuff out child sexual exploitation and abuse on the internet. The law will require websites and services to scan and somehow remove all content that could be harmful to kids before it appears online. 

This could fundamentally change the rules of the game not only for big social media sites but also for any platform that offers messaging services. A provision within the law requires companies to develop technology that enables them to scan encrypted messages, thus effectively banning end-to-end encryption. There is powerful backing for similar laws to be passed in both the U.S. and the European Union.

Scouring the web in an effort to protect children from the worst kinds of abuse sounds like a noble endeavor. But practically speaking, this means the state would be surveilling literally everything we write or post, whether on a public forum or in a private message. If you don’t already have a snoopy government on your hands, a law like this could put you just one election away from a true mass surveillance regime of unprecedented scale. Surely, there are other ways to keep kids safe that won’t be quite so detrimental to democracy.

As a parent of two tiny children, I feel a little twinge when I criticize these kinds of laws. Maybe the internet really is rife with content that is harmful to children. Maybe we should be making these tradeoffs after all. But is kids’ safety really what’s driving the incredibly powerful lobbying groups that somehow have a seat at every table that matters on this issue, from London to D.C. to Brussels?

It is not. This week, Balkan Insight dropped a deeply reported, follow-the-money investigation into the network of lobbying groups pushing for this kind of “safety” legislation in Europe. It makes a connection that really ought to be on everyone’s radar: The AI industry is a major lobbying force driving these laws.

The piece takes a hard look at Thorn, a U.S. organization that has been a vocal advocate for children’s online safety but that has also developed proprietary AI software that scans for child abuse images. Thorn seems to be advocating for companies to scan every drop of data that passes through their servers with one hand and then offering the perfect technical tool for said scanning with the other. It’s quite the scheme. And it seems to be working so far — the U.K. law is a done deal, and talks are moving fast in the right direction for Thorn in Europe and the U.S. Oh, and the U.S. Department of Homeland Security is already among Thorn’s current clients.

As a number of sources quoted in the Balkan Insight investigation point out, these laws might not even be the best way to tackle child exploitation online. They will require tech companies to break encryption across the internet, leaving people vulnerable to all kinds of abuse, child exploitation included. This level of surveillance will probably send the worst predators into deeper, darker corners of the web, making them even harder to track down. And trying to scan everything is often not the best way to trace the activities of criminal groups. 

I’m sure that some of the people pushing for these laws care deeply about protecting kids and believe that they are doing the best possible thing to make them safer. But plenty of them are driven by profit. That is something to worry about.

GLOBAL NEWS

The internet was barely accessible last week in the disputed territory of Nagorno-Karabakh, where Azerbaijani military troops have effectively claimed control of the predominantly ethnic Armenian region. Tens of thousands of Karabakhi Armenians are fleeing the mountainous region that abuts the Azerbaijani-Armenian border in what one MEP described as a “continuation of the Armenian genocide.” The role of Russia in the conflict (Moscow, amid the war in Ukraine, seems to have withdrawn its long-time support for the Armenian side) and the importance of Azerbaijan to Europe as a major oil producer have dominated most of the international coverage. But the situation for people in the region is dire and has largely been ignored. The lack of basic digital connectivity isn’t helping — researchers at NetBlocks showed last Thursday that Karabakh Telecom had almost no connectivity from September 19, when the full military offensive was launched, until September 21, when Armenian separatist fighters surrendered. TikTok was also blocked during this period.

Azerbaijani authorities are also taking measures to ensure that their critics keep quiet online. Several Azerbaijani activists and journalists who have posted critical messages or coverage of the war on social media have been arrested for posting “prohibited” content.

An internet blackout has gone back into effect in Manipur, India, just days after services were restored. The northeastern state, which borders Myanmar, had been under a blackout since the beginning of May, as nearly 200 people have died in still-ongoing ethnic violence; the restriction was finally lifted last weekend. But protests erupted this week in Imphal, the state capital, after photos of the slain bodies of two students who had gone missing in July surfaced and went viral on social media. Now the Manipur government, which has largely failed to contain the violence, even as its critics accuse it of fomenting clashes, has said disinformation, rumors and calls for violence are being spread online, necessitating another shutdown. An order from the state governor’s office, which has been circulating on X, says the shutdown will last for another five days. Indian authorities frequently shut down the internet in embattled states, despite the cost to the economy — an estimated $1.9 billion in the first half of this year alone — and the apparent lack of effect on public safety.

Speaking of shutdowns, there’s new hope that Amazon might have to shutter some part of its business or at least clean up its practices. This week, the U.S. Federal Trade Commission, alongside 17 state attorneys general, filed a massive lawsuit accusing the e-commerce behemoth of inflating prices, degrading product quality and stifling innovation. These practices, says the FTC, hurt both consumers and third-party sellers, who have little choice but to sell their goods on Amazon’s platform. This is a bread-and-butter anti-monopoly case — it doesn’t rely on the pioneering legal theories that FTC Chair Lina Khan is known for. In legal scholar and former White House tech advisor Tim Wu’s view, “The FTC complaint against Amazon shows how much, over the last 15 years, Silicon Valley has understood and used scale as a weapon. In other words, the new economy relied on the oldest strategy in the playbook — deny scale to opponents.” I couldn’t agree more.

]]>
For Arab dissidents, the walls are closing in https://www.codastory.com/authoritarian-tech/arab-dissidents-extradition/ Wed, 27 Sep 2023 13:30:14 +0000 https://www.codastory.com/?p=46595 The Arab League is relying on the little-known Arab Interior Ministers Council to target critics abroad. Now, a former detainee is taking them to court in the U.S.

The post For Arab dissidents, the walls are closing in appeared first on Coda Story.

]]>
In November 2022, Sherif Osman was having lunch with his fiancee, his sister and other family members at a glittering upscale restaurant in Dubai. A former military officer in Egypt and now a U.S. citizen, Osman had traveled to Dubai with his fiancee, Virta, so his family could meet her for the first time.

Toward the end of the meal, Osman got up and said to Virta, “Go ahead and finish up, I’ll go vape outside.” He kissed her on the forehead and walked out the door. 

When Virta came out of the restaurant a few minutes later, she saw Osman talking to two men. Initially, she thought they were talking about parking spots. Then one of them grabbed his arm and started dragging him into a car.

Virta tried to get to Osman but the car sped away, leaving her standing on the side of the road with his family.

Virta, who is originally from Finland, knew that Osman had been making YouTube videos about human rights violations in Egypt, but it was a part of his life she knew little about. Osman left Egypt in 2004 after becoming frustrated with the corruption he witnessed within the government while serving as an air force captain. He is now considered a deserter. Two years after leaving his home country, he set up a YouTube channel, @SherifOsmanClub, where he routinely criticized the Egyptian government. Today, the channel has more than 40,000 subscribers. 

A few weeks before traveling to Dubai, Osman had posted a video calling for Egyptians to capitalize on COP27, the United Nations climate conference due to be held that month in Sharm El-Sheikh, to protest the state’s dismal human rights record and the rising cost of living.

In the car, Osman’s mind was spinning. When they approached a turn on the highway that leads to the international airport he began to panic, fearful that he was on a one-way trip to his grave.

“I have seen very, very, very high-ranking Egyptians that have lived in Dubai and opened their mouths with a different narrative on Egypt, and they were actually put on a flight and shipped out to Egypt,” he said, referring to former Egyptian prime minister Ahmed Shafiq, who was deported from the UAE just days after he announced he was running for president in 2017.

Osman soon realized that he was being taken to the Dubai police headquarters.

Dubai’s central prison where Sherif Osman was detained. Giuseppe Cacace/AFP via Getty Images.

He was escorted through the back entrance of the building. Osman waited for hours while officers moved frantically around the room, giving him no information. When he asked for clarity, they told him to wait and promised to bring him coffee.

“They actually made me coffee,” he told me, laughing. Osman’s sardonic sense of humor comes out in full force when he recounts the ordeal.

Osman was eventually taken from police headquarters to the Dubai Central Prison where he was made to wait while the authorities decided if he would be deported to Egypt. On November 15, Charles McClellan, an officer in the U.S. Consulate in Dubai, told Virta that Interpol had issued a red notice and extradition case number for Osman.

A few days later, Virta sent an email to Radha Stirling in Windsor, a town in southeast England, pleading for assistance. “Sherif’s deportation to Egypt is a death penalty without a fair trial!” Virta wrote.

Stirling, the CEO of an organization called Detained in Dubai, was no stranger to these kinds of cases. Knowing that the United Arab Emirates could extradite a U.S. citizen to Egypt in the dark of night, Stirling acted quickly. She contacted the American embassy to offer advice, tried to rally support from U.S. politicians and sought media coverage of the case.

And then something strange happened. McClellan told Stirling that he’d gotten new information: According to the UAE, Osman was detained on a “red notice” issued by a less well-known organization: the Arab Interior Ministers Council. An Emirati official speaking to The Guardian confirmed the same.

When Osman learned it was not Interpol but rather the Arab Interior Ministers Council pursuing the case, his heart sank. “That’s when I was like, I’m fucked,” he told me.

The Arab League meeting in Cairo on May 7, 2023. Khaled Desouki/AFP via Getty Images.

A body made up of the interior ministries of all 22 Arab League states, the Arab Interior Ministers Council was established in the 1980s to strengthen cooperation between Arab states on internal security and combating crime. In recent years, it has played an increasingly visible role in extradition cases between Arab countries, particularly in cases that appear to be politically motivated.

Experts I spoke with say that the shift has occurred as some of the Council’s member states, including the UAE and Egypt, have become notorious for abusing Interpol’s system. Although it is often portrayed in the media as an international police force with armed agents and the power to investigate crimes, Interpol is best understood as an electronic bulletin board where states can post “wanted” notices and other information about suspected criminals. Arab League states are increasingly posting red notices via Interpol in an effort to target political opponents, despite Interpol rules expressly prohibiting the practice.

Ted Bromund, a senior research fellow at the Heritage Foundation, thinks tensions surrounding Interpol may be driving increased cooperation within the Council, especially in politically motivated cases. “My suspicion is that this Arab Ministers Council is basically a reaction to the fact that Interpol is maybe not quite as compliant or as lax as they used to be,” Bromund told me.

It was around 2018, shortly after Washington Post columnist Jamal Khashoggi, a Saudi-born U.S. resident, was murdered in the Saudi Arabian consulate in Turkey, that Abdelrahman Ayyash first heard of the Council. Ayyash is a case manager at the Freedom Initiative, which advocates for people wrongfully detained in the Middle East and North Africa.

Ayyash told me that over the past year he has identified at least nine cases in which the Council was likely involved in the extradition or arrest of political dissidents, with some of them dating as far back as 2016. In one case, Kuwait extradited eight Egyptians to Cairo in 2019 following accusations that they were part of a terrorist cell with links to the Muslim Brotherhood. Ayyash suspects their arrest and deportation stemmed from a notice from the Arab Interior Ministers Council.

In another case from 2019, highlighted by other advocates, Morocco extradited activist Hassan al-Rabea to Saudi Arabia after he was arrested on a warrant that The New Arab reported was issued by the Council. Hassan’s brother Munir is wanted by the Saudi government due to his involvement in the country’s 2011 protest movement. Their older brother, Ali, is already in a Saudi prison, where he is facing the death penalty. Another of al-Rabea’s brothers, Ahmed, told me over the phone from Canada that he is now extremely careful about where he travels: “For me, like all my brothers, it is extremely scary to go to any Arab country,” he said.

Agreements enabling more extradition cooperation among Arab states and other nearby countries are also being widely adopted. In 2020, Morocco, Sudan, the UAE and Bahrain signed agreements with Israel known as the Abraham Accords, which established official relations between the signatories. Since then, Morocco and the UAE in particular have increased their use of repressive technologies developed by Israeli companies when targeting dissidents abroad. Last year, 24% of Israel’s defense exports went to Abraham Accords signatories. In 2021, Egypt signed an agreement to strengthen military cooperation with Sudan after years of tensions, including a border dispute.

Members of the Arab Interior Ministers Council are signatories to the Riyadh Arab Agreement for Judicial Cooperation and the Arab Convention for the Suppression of Terrorism, which prohibit extraditions if the crime is of a “political nature.”

In June, three U.N. special rapporteurs wrote a letter to the Arab League stating that red notices issued by the Council do not comply with member states’ commitments under international law, including non-refoulement, non-discrimination, due diligence and fair trial guarantees.

Saudi Arabian Crown Prince Mohammed bin Salman greets President of Egypt Abdel Fattah El-Sisi ahead of the 32nd Arab League Summit in Jeddah, Saudi Arabia on May 19, 2023. Bandar Aljaloud/Royal Court of Saudi Arabia/Handout/Anadolu Agency via Getty Images.

A few weeks after Osman’s arrest, Virta returned to the U.S. for her job. She adjusted her schedule to work different hours, so she could be awake for part of the night working on his release.

Behind bars in Dubai, Osman was struggling to sleep. “The second I opened my eyes my head would go numb, the exact second my eyes opened, I realized I am in deep shit,” he told me. “I can count the days that I had a full night’s sleep on one hand and have left over fingers.”

Virta was certain the UAE was going to extradite him to Egypt. But then, late one night towards the end of December, she got a call.

“I have some good news,” Osman told her. He was going to be released.

Osman was taken to the airport five days later, but it was not until the plane door closed that he allowed himself to believe he was actually going home. When the door clicked shut, he passed out from exhaustion. Osman had spent 46 days in detention.

This past July, Osman filed a lawsuit in the U.S. District Court in Washington, D.C., against Interpol and its president, Maj. Gen. Ahmed Naser Al-Raisi; the UAE and its deputy prime minister; Egypt and its president, Abdel Fattah El-Sisi; the Arab Interior Ministers Council; a UAE prosecutor; and four other unnamed individuals. The complaint accuses them of international terrorism for their “kidnapping, abduction, imprisonment, prosecution, and threatened extradition” of Osman.

The 32nd Arab League Summit in Jeddah, Saudi Arabia on May 19, 2023. Bandar Aljaloud/Royal Court of Saudi Arabia/Handout/Anadolu Agency via Getty Images.

The lawsuit accuses Interpol of colluding to shift the justification for Osman’s detention from an Interpol red notice to one issued by the Arab Interior Ministers Council. An Interpol spokesperson said “there is no indication that a notice or diffusion ever existed in Interpol’s databases,” but Osman’s lawyers say otherwise.

Osman hopes that the case will push Interpol to agree to reforms, such as improving its system for reviewing cases in order to determine whether they are politically motivated. If his lawyers can prove that what the Arab Interior Ministers Council did was an act of terrorism, Osman expects it will become much harder for Arab states to justify their participation in its functions. “Funding it would be very hard at that point,” he said, as it would effectively mean that the Arab League was funding a terrorist organization. One of Osman’s lawyers is also seeking an agreement from the UAE to stop accepting red notices for U.S. citizens by way of the Council.

Osman and Virta now live in a small city in Massachusetts, where they largely keep to themselves. “The speed limit is 35 miles and people don’t say hi to each other. It’s New England, so everybody’s an asshole,” said Osman. “There’s even a word for it: ‘Massholes.’”

He sees a psychologist who specializes in post-traumatic stress disorder. Osman says it is helping him understand what feels like a “new self.”

Osman is trying to launch a cannabis cultivation business, which missed out on some vital funding when investors heard about his arrest. He stayed quiet for six months after his release, but recently went back to posting about Egypt’s human rights record online. 

“I’m back again, talking and tearing down the president and his regime and military regime without mercy,” he said. “I got the news that they are worried in Egypt about my case.”

CORRECTION (09/29/2023): An earlier version of this article described Jamal Khashoggi as a U.S. citizen. It has been corrected to reflect that Khashoggi was a U.S. resident.

]]>