When deepfakes go nuclear
https://www.codastory.com/authoritarian-tech/ai-nuclear-war/ (Tue, 28 Nov 2023)
Governments already use fake data to confuse their enemies. What if they start doing this in the nuclear realm?

Two servicemen sit in an underground missile launch facility. Before them is a matrix of buttons and bulbs glowing red, white and green. Old-school screens with blocky, all-caps text beam beside them. Their job is to be ready, at any time, to launch a nuclear strike. Suddenly, an alarm sounds. The time has come for them to fire their deadly weapon.

Why did we write this story?

AI-generated deepfakes could soon begin to affect military intelligence communications. In line with our focus on authoritarianism and technology, this story delves into the possible consequences that could emerge as AI makes its way into the nuclear arena.

With the correct codes input, the doors to the missile silo open, pointing a bomb at the sky. Sweat shines on their faces. For the missile to fly, both must turn their keys. But one of them balks. He picks up the phone to call their superiors.

“That’s not the procedure,” says his partner. “Screw the procedure,” the dissenter says. “I want somebody on the goddamn phone before I kill 20 million people.”

Soon, the scene — which opens the 1983 techno-thriller “WarGames” — transitions to another set deep inside Cheyenne Mountain, a military outpost buried beneath thousands of feet of Colorado granite. It exists in real life and is dramatized in the movie. 

In “WarGames,” the main room inside Cheyenne Mountain hosts a wall of screens that show the red, green and blue outlines of continents and countries, and what’s happening in the skies above them. There is not, despite what the servicemen have been led to believe, a nuclear attack incoming: The alerts were part of a test sent out to missile commanders to see whether they would carry out orders. All in all, 22% failed to launch.

“Those men in the silos know what it means to turn the keys,” says an official inside Cheyenne Mountain. “And some of them are just not up to it.” But he has an idea for how to combat that “human response,” the impulse not to kill millions of people: “I think we ought to take the men out of the loop,” he says. 

From there, an artificially intelligent computer system enters the plotline and goes on to cause nearly two hours of potentially world-ending problems. 

Discourse about the plot of “WarGames” usually focuses on the scary idea that a computer nearly launches World War III by firing off nuclear weapons on its own. But the film illustrates another problem that has become more pressing in the 40 years since it premiered: The computer displays fake data about what’s going on in the world. The human commanders believe it to be authentic and respond accordingly.

In the real world, countries — or rogue actors — could use fake data, inserted into genuine data streams, to confuse enemies and achieve their aims. How to deal with that possibility, along with other consequences of incorporating AI into the nuclear weapons sphere, could make the coming years on Earth more complicated.

The word “deepfake” didn’t exist when “WarGames” came out, but as real-life AI grows more powerful, it may become part of the chain of analysis and decision-making in the nuclear realm of tomorrow. The idea of synthesized, deceptive data is one AI issue that today’s atomic complex has to worry about.

You may have encountered the fruits of this technology in the form of Tom Cruise playing golf on TikTok, LinkedIn profiles for people who have never inhabited this world or, more seriously, a video of Ukrainian President Volodymyr Zelenskyy declaring the war in his country to be over. These are deepfakes — pictures or videos of things that never happened, but which can look astonishingly real. It becomes even more vexing when AI is used to create images that attempt to depict things that are indeed happening. Adobe recently caused a stir by selling AI-generated stock photos of violence in Gaza and Israel. The proliferation of this kind of material (alongside plenty of less convincing stuff) leads to an ever-present worry that any image presented as fact might actually have been fabricated or altered.

It may not matter much whether Tom Cruise was really out on the green, but the ability to see or prove what’s happening in wartime — whether an airstrike took place at a particular location or whether troops or supplies are really amassing at a given spot — can actually affect the outcomes on the ground. 

Similar kinds of deepfake-creating technologies could be used to whip up realistic-looking data — audio, video or images — of the sort that military and intelligence sensors collect and that artificially intelligent systems are already starting to analyze. It’s a concern for Sharon Weiner, a professor of international relations at American University. “You can have someone trying to hack your system not to make it stop working, but to insert unreliable data,” she explained.

James Johnson, author of the book “AI and the Bomb,” writes that when autonomous systems are used to process and interpret imagery for military purposes, “synthetic and realistic-looking data” can make it difficult to determine, for instance, when an attack might be taking place. People could use AI to gin up data designed to deceive systems like Project Maven, a U.S. Department of Defense program that aims to autonomously process images and video and draw meaning from them about what’s happening in the world.

AI’s role in the nuclear world isn’t yet clear. In the U.S., the White House recently issued an executive order about trustworthy AI, mandating in part that government agencies address the nuclear risks that AI systems bring up. But problem scenarios like some of those conjured by “WarGames” aren’t out of the realm of possibility. 

In the film, a teenage hacker taps into the military’s system and starts up a game he finds called “Global Thermonuclear War.” The computer displays the game data on the screens inside Cheyenne Mountain, as if it were coming from the ground. In the Rocky Mountain war room, a siren soon blares: It looks like Soviet missiles are incoming. Luckily, an official runs into the main room in a panic. “We’re not being attacked,” he yells. “It’s a simulation!”

In the real world, someone might instead try to cloak an attack with deceptive images that portray peace and quiet.

Researchers have already shown that the general idea behind this is possible: Scientists published a paper in 2021 on “deepfake geography,” or simulated satellite images. In that milieu, officials have worried about images that might show infrastructure in the wrong location or terrain that’s not true to life, messing with military plans. Los Alamos National Laboratory scientists, for instance, made satellite images that included vegetation that wasn’t real and showed evidence of drought where the water levels were fine, all for the purposes of research. You could theoretically do the same for something like troop or missile-launcher movement.

AI that creates fake data is not the only problem: AI could also be on the receiving end, tasked with analysis. That kind of automated interpretation is already ongoing in the intelligence world, although it’s unclear specifically how it will be incorporated into the nuclear sphere. For instance, AI on mobile platforms like drones could help process data in real time and “alert commanders of potentially suspicious or threatening situations such as military drills and suspicious troop or mobile missile launcher movements,” writes Johnson. That processing power could also help detect manipulation because of the ability to compare different datasets. 
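
To make that cross-checking idea concrete, here is a toy sketch of how an analysis pipeline might flag disagreement between two independent sensor feeds covering the same area. The feed names, grid size and threshold are all invented for illustration; nothing here reflects any real military system.

```python
# Toy sketch: flag possible manipulation by comparing two independent
# sensor feeds of the same area. Feeds, grid size and the disagreement
# threshold are all invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each feed reports detected-vehicle counts on a 10x10 grid.
satellite_feed = rng.poisson(lam=3.0, size=(10, 10))
drone_feed = satellite_feed + rng.integers(-1, 2, size=(10, 10))  # mostly agrees

# An adversary quietly edits one region of the satellite feed to look empty.
satellite_feed[2:4, 5:8] = 0

def flag_disagreement(feed_a, feed_b, threshold=2):
    """Mask of cells where the feeds differ by more than `threshold`
    detections -- candidates for closer human review."""
    return np.abs(feed_a - feed_b) > threshold

suspect = flag_disagreement(satellite_feed, drone_feed)
print(f"{suspect.sum()} of {suspect.size} cells flagged for review")
print(np.argwhere(suspect))  # grid coordinates of the flagged cells
```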

But creating those sorts of capabilities can help bad actors do their fooling. “They can take the same techniques these AI researchers created, invert them to optimize deception,” said Edward Geist, an analyst at the RAND Corporation. For Geist, deception is a “trivial statistical prediction task.” But recognizing and countering that deception is where the going gets tough. It involves a “very difficult problem of reasoning under uncertainty,” he told me. Amid the generally high-stakes feel of global dynamics, and especially in conflict, countries can never be exactly sure what’s going on, who’s doing what, and what the consequences of any action may be.

There is also the potential for fakery in the form of data that’s real: Satellites may accurately display what they see, but what they see has been expressly designed to fool the automated analysis tools.

As an example, Geist pointed to Russia’s intercontinental ballistic missiles. When they are stationary, they’re covered in camo netting, making them hard to pick out in satellite images. When the missiles are on the move, special devices attached to the vehicles that carry them shoot lasers toward detection satellites, blinding them to the movement. At the same time, decoys are deployed — fake missiles dressed up as the real deal, to distract and thwart analysis. 

“The focus on using AI outstrips or outpaces the emphasis put on countermeasures,” said Weiner.

Given that both physical and AI-based deception could interfere with analysis, it may one day become hard for officials to trust any information — even the solid stuff. “The data that you’re seeing is perfectly fine. But you assume that your adversary would fake it,” said Weiner. “You then quickly get into the spiral where you can’t trust your own assessment of what you found. And so there’s no way out of that problem.” 

From there, it’s distrust all the way down. “The uncertainties about AI compound the uncertainties that are inherent in any crisis decision-making,” said Weiner. Similar situations have arisen in the media, where it can be difficult for readers to tell if a story about a given video — like an airstrike on a hospital in Gaza, for instance — is real or in the right context. Before long, even the real ones leave readers feeling dubious.

Ally Sheedy and Matthew Broderick in the 1983 MGM/UA movie “WarGames.” Hulton Archive/Getty Images.

More than a century ago, Alfred von Schlieffen, a German war planner, envisioned the battlefield of the future: a person sitting at a desk with telephones splayed across it, ringing in information from afar. This idea of having a godlike overview of conflict — a fused vision of goings-on — predates both computers and AI, according to Geist.

Using computers to synthesize information in real time goes back decades too. In the 1950s, for instance, the U.S. built the Continental Air Defense Command, which relied on massive early computers for awareness and response. But tests showed that a majority of Soviet bombers would have been able to slip through — often because they could fool the defense system with simple decoys. “It was the low-tech stuff that really stymied it,” said Geist. Some military and intelligence officials have concluded that next-level situational awareness will come with just a bit more technological advancement than they previously thought — although this has not historically proven to be the case. “This intuition that people have is like, ‘Oh, we’ll get all the sensors, we’ll buy a big enough computer and then we’ll know everything,’” he said. “This is never going to happen.”

This type of thinking seems to be percolating once again and might show up in attempts to integrate AI in the near future. But Geist’s research, which he details in his forthcoming book “Deterrence Under Uncertainty: Artificial Intelligence and Nuclear Warfare,” shows that the military will “be lucky to maintain the degree of situational awareness we have today” if they incorporate more AI into observation and analysis in the face of AI-enhanced deception. 

“One of the key aspects of intelligence is reasoning under uncertainty,” he said. “And a conflict is a particularly pernicious form of uncertainty.” An AI-based analysis, no matter how detailed, will only ever be an approximation — and in uncertain conditions there’s no approach that “is guaranteed to get an accurate enough result to be useful.” 

In the movie, with the proclamation that the Soviet missiles are merely simulated, the crisis is temporarily averted. But the wargaming computer, unbeknownst to the authorities, is continuing to play. As it keeps making moves, it displays related information about the conflict on the big screens inside Cheyenne Mountain as if it were real and missiles were headed to the States. 

It is only when the machine’s inventor shows up that the authorities begin to think that maybe this could all be fake. “Those blips are not real missiles,” he says. “They’re phantoms.”

To rebut fake data, the inventor points to something indisputably real: The attack on the screens doesn’t make sense. Such a full-scale wipeout would immediately prompt the U.S. to total retaliation — meaning that the Soviet Union would be almost ensuring its own annihilation. 

Using his own judgment, the general calls off the U.S.’s retaliation. As he does so, the missiles onscreen hit the 2D continents, colliding with the map in circular flashes. But outside, in the real world, all is quiet. It was all a game. “Jesus H. Christ,” says an airman at one base over the comms system. “We’re still here.”

Similar nonsensical alerts have appeared on real-life screens. Once, in the U.S., alerts of incoming missiles came through due to a faulty computer chip. The system that housed the chip sent erroneous missile alerts on multiple occasions. Authorities had reason to suspect the data was likely false. But in two instances, they began to proceed as if the alerts were real. “Even though everyone seemed to realize that it’s an error, they still followed the procedure without seriously questioning what they were getting,” said Pavel Podvig, senior researcher at the United Nations Institute for Disarmament Research and a researcher at Princeton University. 

In Russia, meanwhile, operators did exercise independent thought in a similar scenario, when an erroneous preliminary launch command was sent. “Only one division command post actually went through the procedure and did what they were supposed to do,” he said. “All the rest said, ‘This has got to be an error,’” because it would have been a surprise attack not preceded by increasing tension, as expected. It goes to show, Podvig said, “people may or may not use their judgment.” 

You can imagine, Podvig continued, that in the near future nuclear operators might see an AI-generated assessment saying circumstances were dire. In such a situation, there is a need “to instill a certain kind of common sense,” he said, and to make sure that people don’t just take whatever appears on a screen as gospel. “The basic assumptions about scenarios are important too,” he added. “Like, do you assume that the U.S. or Russia can just launch missiles out of the blue?”
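
Podvig’s point about baseline assumptions can be restated as a base-rate problem. A toy Bayesian calculation (every number below is invented for illustration) shows how much the assumed prior probability of a surprise attack changes what an alert should mean:

```python
# Toy Bayes' rule calculation: how much should an attack alert be
# trusted? All probabilities here are invented for illustration.

def p_attack_given_alert(prior, hit_rate=0.99, false_alarm_rate=0.001):
    """Posterior probability of a real attack given an alert."""
    numerator = hit_rate * prior
    return numerator / (numerator + false_alarm_rate * (1 - prior))

# If a bolt-from-the-blue strike is assumed essentially impossible,
# even a reliable sensor's alert is almost certainly a glitch...
print(p_attack_given_alert(prior=1e-7))  # ~0.0001
# ...whereas in an escalating crisis, a higher prior flips the answer.
print(p_attack_given_alert(prior=0.01))  # ~0.91
```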

People, for now, will likely continue to exercise judgment about attacks and responses — keeping, as the jargon goes, a “human in the loop.”

The idea of asking AI to make decisions about whether a country will launch nuclear missiles isn’t an appealing option, according to Geist, though it does appear in movies a lot. “Humans jealously guard these prerogatives for themselves,” Geist said. 

“It doesn’t seem like there’s much demand for a Skynet,” he said, referencing another movie, “Terminator,” where an artificial general superintelligence launches a nuclear strike against humanity.

Podvig, an expert in Russian nuclear goings-on, doesn’t see much desire for autonomous nuclear operations in that country. 

“There is a culture of skepticism about all this fancy technological stuff that is sent to the military,” he said. “They like their things kind of simple.” 

Geist agreed. While he admitted that Russia is not totally transparent about its nuclear command and control, he doesn’t see much interest in handing the reins to AI.

China, of course, is generally very interested in AI, and specifically in pursuing artificial general intelligence, a type of AI which can learn to perform intellectual tasks as well as or even better than humans can.

William Hannas, lead analyst at the Center for Security and Emerging Technology at Georgetown University, has used open-source scientific literature to trace developments and strategies in China’s AI arena. One big development is the founding of the Beijing Institute for General Artificial Intelligence, backed by the state and directed by former UCLA professor Song-Chun Zhu, who has received millions of dollars of funding from the Pentagon, including after his return to China. 

Hannas described how China has shown a national interest in “effecting a merger of human and artificial intelligence metaphorically, in the sense of increasing mutual dependence, and literally through brain-inspired AI algorithms and brain-computer interfaces.”

“A true physical merger of intelligence is when you’re actually lashed up with the computing resources to the point where it does really become indistinguishable,” he said. 

That’s relevant to defense discussions because, in China, there’s little separation between regular research and the military. “Technological power is military power,” he said. “The one becomes the other in a very, very short time.” Hannas, though, doesn’t know of any AI applications in China’s nuclear weapons design or delivery. Recently, U.S. President Joe Biden and Chinese President Xi Jinping met and made plans to discuss AI safety and risk, which could lead to an agreement about AI’s use in military and nuclear matters. Also, in August, regulations on generative AI developed by China’s Cyberspace Administration went into effect, making China a first mover in the global race to regulate AI.

It’s likely that the two countries would use AI to help with their vast streams of early-warning data. And just as AI can help with interpretation, countries can also use it to skew that interpretation, to deceive and obfuscate. All three tasks are age-old military tactics — now simply upgraded for a digital, unstable age.

Science fiction convinced us that a Skynet was both a likely option and closer on the horizon than it actually is, said Geist. AI will likely be used in much more banal ways. But the ideas that dominate “WarGames” and “Terminator” have endured for a long time. 

“The reason people keep telling this story is it’s a great premise,” said Geist. “But it’s also the case,” he added, “that there’s effectively no one who thinks of this as a great idea.” 

It’s probably so resonant because people tend to have a black-and-white understanding of innovation. “There’s a lot of people very convinced that technology is either going to save us or doom us,” said Nina Miller, who formerly worked at the Nuclear Threat Initiative and is currently a doctoral student at the Massachusetts Institute of Technology. The notion of an AI-induced doomsday scenario is alive and well in the popular imagination and also has made its mark in public-facing discussions about the AI industry. In May, dozens of tech CEOs signed an open letter declaring that “mitigating the risk of extinction from AI should be a global priority,” without saying much about what exactly that means. 

But even if AI does launch a nuclear weapon someday (or provide false information that leads to an atomic strike), humans still made the decisions that led us there. Humans created the AI systems and made choices about where to use them. 

And, besides, in the case of a hypothetical catastrophe, AI didn’t create the environment that led to a nuclear attack. “Surely the underlying political tension is the problem,” said Miller. And that is thanks to humans and their desire for dominance — or their motivation to deceive. 

Maybe the humans need to learn what the computer did at the end of “WarGames.” “The only winning move,” it concludes, “is not to play.”

The post When deepfakes go nuclear appeared first on Coda Story.

]]>
In India, Big Brother is watching
https://www.codastory.com/authoritarian-tech/india-surveillance-modi-democratic-freedoms/ (Tue, 21 Nov 2023)
Apple warned Indian journalists and opposition politicians last month that their phones had likely been hacked by a state-sponsored attacker. Is this more evidence of democratic backsliding?

Last month, journalist Anand Mangnale woke to find a disturbing notification from Apple on his mobile phone: “State-sponsored attackers may be targeting your iPhone.” He was one of at least a dozen journalists and Indian opposition politicians who said they had received the same message. “These attackers are likely targeting you individually because of who you are and what you do,” the warning read. “While it’s possible this is a false alarm, please take it seriously.”

Why This Story?

India, the world’s most populous democracy, goes to the polls next year and is likely to reelect Narendra Modi for a third consecutive five-year term. But evidence is mounting that India’s democratic freedoms are in regression.

Mangnale is an editor at the Organized Crime and Corruption Reporting Project, a global non-profit media outlet. In August, he and his co-authors Ravi Nair and NBR Arcadio published a detailed inquiry into labyrinthine offshore investment structures through which the Adani Group — an India-based multibillion-dollar conglomerate with interests in everything from ports, infrastructure and cement to green energy, cooking oil and apples — might have been manipulating its stock price. The documents were shared with both the Financial Times and The Guardian, which also published lengthy stories alleging that the Adani Group appeared to be using funds from shell companies in Mauritius to break Indian stock market rules.

Mangnale’s phone was attacked with spyware just hours after reporters had submitted questions to the Adani Group in August for their investigation, according to an OCCRP press release. Mangnale hadn’t sent the questions, but as the regional editor, his name was easy to find on the OCCRP website.

Gautam Adani, the Adani Group’s chairman and the second richest person in India, has been close to Indian Prime Minister Narendra Modi for decades. When Modi was campaigning in the 2014 general elections, which brought him to power with a sweeping majority, he used a jet and two helicopters owned by the Adani Group to crisscross the country. Modi’s perceived bond with Adani as well as with Mukesh Ambani, India’s richest man — all three come from the prosperous western Indian state of Gujarat — has for years given rise to accusations of crony capitalism and suggestions that India now has its own set of Russian-style oligarchs.

The Adani Group’s supposed influence on Modi is a major campaign issue for opposition parties, many of which are coming together in a coalition to take on the ruling Bharatiya Janata Party in the 2024 general election. According to Rahul Gandhi — leader of the opposition Congress party and scion of the Nehru-Gandhi dynasty, which has provided three Indian prime ministers — the Adani Group is so close to power it is practically synonymous with the government. He said Apple’s threat notifications showed that the government was hacking the phones of politicians who sought to expose Adani and his hold over Modi. 

Mahua Moitra, a prominent opposition politician and outspoken critic of Adani, reported that she had also received Apple’s warning on her phone. She posted on X: “Adani and PMO bullies — your fear makes me pity you.” PMO stands for the prime minister’s office.

Mangnale, referring to the opposition’s allegations, told me that there was only circumstantial evidence to suggest that the Apple notification could be tied to the Indian government. As for his own phone, a forensic analysis commissioned by OCCRP did not indicate which government or government agency was behind the attack, nor did it surface any evidence that the Adani Group was involved. But the timing raised eyebrows, as the Modi government has been accused in the past of using spyware on political opponents, critical journalists, scholars and lawyers. 

In 2019, the messaging service WhatsApp, owned by Meta, filed a lawsuit in a U.S. federal court against the Israel-based NSO Group, developers of a spyware called Pegasus, in which it was revealed that the software had been used to target Indian journalists and activists. Two years later, The Pegasus Project, an international journalistic investigation, reported that the phone numbers of at least 300 Indian individuals — Rahul Gandhi among them — had been slated for targeting with the eponymous weapons-grade spyware. And in 2022, The New York Times reported that Pegasus spyware was included in a $2 billion defense deal that Modi signed in 2017, on the first ever visit made by an Indian prime minister to Israel. In November 2021, Apple sued NSO too, arguing that in a “free society, it is unacceptable to weaponize powerful state-sponsored spyware against those who seek to make the world a better place.”

What is happening to Mangnale is the most recent iteration of a script that has been playing out for the last nine years. India’s democratic regression is evident in its declining scores in a variety of international indices. In the latest World Press Freedom Index, compiled by Reporters Without Borders, India ranks 161st out of 180 countries, and its score has been declining sharply since 2017. According to RSF, “violence against journalists, the politically partisan media and the concentration of media ownership all demonstrate that press freedom is in crisis.”

By May next year, India will hold general elections, in which Modi is expected to win a third consecutive five-year term as prime minister and further entrench a Hindu nationalist agenda. Since 2014, as India has become a potential strategic counterweight to runaway Chinese power and influence in the Indo-Pacific region, Modi has reveled in being increasingly visible on the global stage. Abroad, he has brandished India’s credentials as a pluralist democracy. The mounting criticism in the Western media of his authoritarian tendencies and Hindu chauvinism has seemingly had little effect on India’s diplomatic standing. Meanwhile at home, Modi has arguably been using — perhaps misusing — the full authority of the prime minister’s office to stifle opposition critics.

Indian Prime Minister Narendra Modi and billionaire businessman Gautam Adani (left) have long had a mutually beneficial relationship that critics allege crosses the line into crony capitalism. Vijay Soneji/Mint via Getty Images.

The morning after Apple sent out its warning, there was an outpouring of anger on social media, with leading opposition figures accusing the government of spying. Apple, as a matter of course, says it is “unable to provide information about what causes us to issue threat notifications.” The logic is that such information “may help state-sponsored attackers adapt their behavior to evade detection in the future.” But the lack of information leaves a gap that is then filled by speculation and conspiracies. Apple’s circumspect message, containing within it the possibility that the threat notification might be false altogether, also gives governments plausible deniability.

Right on cue, Ashwini Vaishnaw, India’s minister of information and technology, managed in a single statement to claim that the government was concerned about Apple’s notification and would “get to the bottom of it” while also dismissing surveillance concerns as just bellyaching. “There are many compulsive critics in our country,” Vaishnaw said about the allegations from opposition politicians. “Their only job is to criticize the government.” Lawyer Apar Gupta, founder of the Internet Freedom Foundation, described Vaishnaw’s statements as an attempt to “trivialize or misdirect public attention.”

The spyware attack on his phone was not the only example of Mangnale being targeted after OCCRP published its investigation into the Adani Group’s possibly illegal stock manipulation. In October, the Gujarat police summoned Mangnale and his co-author Ravi Nair to Ahmedabad, Gujarat’s largest city, to question them about the OCCRP report. Neither journalist lives in the state, which made the police summons, based on a single complaint by an investor in Adani stocks, seem like intimidation. It took the intervention of India’s Supreme Court to grant both journalists temporary protection from arrest.

Before the Supreme Court, the well-known lawyer Indira Jaising had argued that the Gujarat police had no jurisdiction to arbitrarily summon Mangnale and Nair to the state without informing them in what capacity they were being questioned. It seemed, she told the court, like a “prelude to arrest” and thus a violation of their constitutional right to personal liberty. A week later, the Supreme Court made a similar ruling to protect two Financial Times correspondents based in India from arrest. The journalists, in Mumbai and Delhi, had not even written the article based on documents shared by the OCCRP, but were still summoned by police to Gujarat. On December 1, the police are expected to explain to the Supreme Court why they are seemingly so eager to question the reporters.

While the mainstream television news networks in India frequently and loudly debate news topics on air, there is little coverage of the pressure that the Indian government puts on individuals who try to hold the government to account. Ravish Kumar, an esteemed Hindi-language journalist, told me that few people in India were aware of the threat to journalists and opposition voices in Modi’s India. “When people hear allegations made by political figures such as Rahul Gandhi, they can be dismissed as politics rather than fact. There is no serious discussion of surveillance in the press,” he said. 

Kumar once had a substantial platform on NDTV, a respected news network that had built its reputation over decades. In March this year, the Adani Group completed a hostile takeover of NDTV, leading to a series of resignations by the network’s most recognizable anchors and editors, including Kumar. NDTV is now yet another of India’s television news networks owned by corporations that are either openly friendly to the Modi government or unwilling to jeopardize their other businesses by being duly critical. 

Nowadays, Kumar reports for his personal YouTube channel, albeit one with about 7.8 million subscribers. A documentary about his lonely fight to keep reporting from India both accurately and skeptically was screened in cinemas across the U.K. and U.S. in July. 

According to Kumar, journalists and critics are naturally fearful about the Indian government’s punitive measures because some have ended up in prison on the basis of dubious evidence found on their phones and laptops. Most notoriously, a group of respected academics, writers and human rights activists were accused of inciting riots in 2018 and plotting to assassinate the prime minister. Independent analysts hired by The Washington Post reported that the electronic evidence in the case was likely planted.

Some of this possibly planted evidence was found on the computer of Stan Swamy, an octogenarian Jesuit priest who was charged with crimes under India’s anti-terror law and died in 2021 as he awaited trial. Swamy suffered from Parkinson’s disease, which can make everyday actions like eating and drinking difficult. While in custody, he was treated so poorly by the authorities that he had to appeal for a month before he was given a straw to make it easier for him to drink.

The threat of arrest hangs like a Damoclean sword above the heads of journalists like Mangnale who dare to ask questions of power and investigate institutional corruption. Despite the interim stay on his arrest, Mangnale still faces further court proceedings and the possibility of interrogation by the Gujarat police. In the words of Drew Sullivan, OCCRP’s publisher: “The police hauling in reporters for vague reasons seems to represent state-sanctioned harassment of journalists and is a direct assault on freedom of expression in the world’s largest democracy.”

In Africa’s first ‘safe city,’ surveillance reigns
https://www.codastory.com/authoritarian-tech/africa-surveillance-china-magnum/ (Wed, 08 Nov 2023)
Nairobi boasts nearly 2,000 Huawei surveillance cameras citywide. But in the nine years since they were installed, it is hard to see their benefits.

Lights, cameras, what action? In Nairobi, the question looms large for millions of Kenyans, whose every move is captured by the flash of a CCTV camera at intersections across the capital.

Though government promises of increased safety and better traffic control seem to play on a loop, crime levels here continue to rise. In the 1990s, Nairobi, with its abundant grasslands, forests and rivers, was known as the “Green City in the Sun.” Today, we more often call it “Nairobbery.”

Special series

This is the third in a series of multimedia collaborations on evolving systems of surveillance in medium-sized cities around the world by photographers at Magnum Photos, data geographers at the Edgelands Institute, an organization that explores how the digitalization of urban security is changing the urban social contract, and essayists commissioned by Coda Story.

Our first two essays examined surveillance in Medellín, Colombia, and Geneva, Switzerland. Next up: Singapore.

I see it every time I venture into Nairobi’s Central Business District. Navigating downtown Nairobi on foot can feel like an extreme sport. I clutch my handbag, keep my phone tucked away and walk swiftly to dodge “boda boda” (motorbike) riders and hawkers whose claim on pedestrian walks is quasi-authoritarian. Every so often, I’ll hear a woman scream “mwizi!” and then see a thief dart down an alleyway. If not that, it will be a motorist hooting loudly at a traffic stop to alert another driver that their vehicle is being stripped of its parts, right then and there.

Every city street is dotted with cameras. They fire off a blinding flash each time a car drives past. But other than that, they seem to have little effect. I have yet to hear of or witness an incident in which thugs were about to rob someone, looked up, saw the CCTV cameras then stopped and walked away.

Nairobi launched its massive traffic surveillance system in 2014 as the country was grappling with a terrorism crisis. A series of major attacks by al-Shabab militants, including the September 2013 attack at Nairobi’s Westgate shopping complex in which 67 people were killed, left the city reeling and politicians under extreme pressure to implement solutions. A modern, digitized surveillance system became a national security priority. And the Chinese tech hardware giant Huawei was there to provide it. 

A joint contract between Huawei and Kenya’s leading telecom, Safaricom, brought us the Integrated Urban Surveillance System, and we became the site of Huawei’s first “Safe City” project in Africa. Hundreds of cameras were deployed across Nairobi’s Central Business District and major highways, all networked and sending data to Kenya’s National Police Headquarters. Nairobi today boasts nearly 2,000 CCTV cameras citywide.

On paper, the system promised the ultimate silver bullet: It put real-time surveillance tools into the hands of more than 9,000 police officers to support crime prevention, accelerated responses and recovery. Officials say police monitor the Kenyan capital at all times and quickly dispatch first responders in case of an emergency.

But do the cameras work? Nine years since they were installed, it is hard to see the benefits of these electronic eyes that follow us around the city day after day.

Early on, Huawei claimed that from 2014 to 2015, crime had decreased by 46% in areas supported by their technologies, but the company has since scrubbed its website of this report. Kenya’s National Police Service reported a smaller drop in crime rates in 2015 in Nairobi, and an increase in Mombasa, the other major city where Huawei’s cameras were deployed. But by 2017, Nairobi’s reported crime rates surpassed pre-installation levels.

According to a June 2023 report by Coda’s partners at the Edgelands Institute, an organization that studies the digitalization of urban security, there has been a steady rise in criminal activity in Nairobi for nearly a decade.

So why did Nairobi adopt this system in the first place? One straightforward answer: Kenya had a problem, and China offered a solution. The Kenyan authorities had to take action and Huawei had cameras to sell. So they made a deal.

Nairobi’s surveillance apparatus today has become part of the “Digital Silk Road” — China’s quest to wire the world. It is a central component of the Belt and Road Initiative, an ambitious global infrastructure development strategy that has spread China’s economic and political influence across the world. 

This hasn’t been easy for China in the industrialized West, with companies like Huawei battling sanctions by the U.S. and legal obstacles both in the U.K. and European Union countries. But in Africa, the Chinese technology giant has a quasi-monopoly on telecommunications infrastructure and technology deployment. Components from the company make up around 70% of 4G networks across the continent.

Chinese companies also have had a hand in building or renovating nearly 200 government buildings across the continent. They have built secure intra-governmental telecommunications networks and gifted computers to at least 35 African governments, according to research by the Heritage Foundation.

Grace Bomu Mutung’u, a Kenyan scholar of IT policy in Africa, currently working with the Open Society Foundations, sees this as part of a race to develop and dominate network infrastructure, and to use this position to gather and capitalize on data that flows through networks.

“The Chinese are way ahead of imperial companies because they are approaching it from a different angle,” she told me. She posits that for China, the Digital Silk Road is meant to set a foundation for an artificial intelligence-based economy that China can control and profit from. Mutung’u derided African governments for being so beholden to development that their leaders keep missing the forest for the trees. “We seem to be caught in this big race. We have yet to define for ourselves what we want from this new economy.”

The failure to define what Africa wants from the data-driven economy and an obsession with basic infrastructure development projects is taking the continent through what feels like another Berlin scramble, Mutung’u told me, referring to the period between the 19th and early 20th centuries that saw European powers increase their stake in Africa from around 10% to about 90%.

“Everybody wants to claim a part of Africa,” she said. “If it wasn’t the Chinese, there would be somebody else trying to take charge of resources.” Mutung’u was alluding to China’s strategy of financing African infrastructure projects in exchange for the continent’s natural resources.

A surveillance camera in one of Nairobi’s matatu buses.

Nairobi was the first city in Africa to deploy Huawei’s Safe City system. Since then, cities in Egypt, Nigeria, South Africa and a dozen other countries across the continent have followed suit. All this has drawn scrutiny from rights groups who see the company as a conduit in the exportation of China’s authoritarian surveillance practices. 

Indeed, Nairobi’s vast web of networked CCTV cameras offers little in the way of transparency or accountability, and experts like Mutung’u say the country doesn’t have sufficient data protection laws in place to prevent the abuse of data moving through surveillance systems. When the surveillance system was put in place in 2014, the country had no data protection laws. Kenya’s Personal Data Protection Act came into force in 2019, but the Office of the Data Protection Commissioner has yet to fully implement and enforce the law.

In a critique of what he described at the time as a “massive new spying system,” human rights lawyer and digital rights expert Ephraim Kenyanito argued that the government and Safaricom would be “operating this powerful new surveillance network effectively without checks and balances.” A few years later, in 2017, Privacy International raised concerns about the risks of capturing and storing all this data without clear policies on how that data should be treated or protected.

There was good reason to worry. In January 2018, an investigation by the French newspaper Le Monde revealed that there had been a data breach at the African Union headquarters in Addis Ababa following a hacking incident. Every night for five years, between 2012 and 2017, data downloaded from AU servers was sent to servers located in China. The Le Monde investigation alleged the involvement of the Chinese government, which denied the accusation. In March 2023, another massive cyber attack at AU headquarters left employees without access to the internet and their work emails for weeks.

The most recent incident brought to the fore growing concerns among local experts and advocacy groups about the surveillance of African leaders as Chinese construction companies continue to win contracts to build sensitive African government offices, and Chinese tech companies continue to supply our telecommunication and surveillance infrastructure. But if these fears have had any effect on agreements between the powers that be, it is not evident.

As the cameras on the streets of Nairobi continue to flash, researchers continue to ponder how, if at all, digital technologies are being used in the approach to security, coexistence and surveillance in the capital city.

The Edgelands Institute report found little evidence linking the adoption of surveillance technology and a decrease in crime in Kenya. It did find that a driving factor in rising crime rates was unemployment. For people under 35, the unemployment rate has almost doubled since 2015 and now hovers at 13.5%.

In a 2022 survey by Kenya’s National Crime Research Centre, a majority of respondents identified community policing as the most effective method of crime reduction. Only 4.2% of respondents identified the use of technology such as CCTV cameras as an effective method.

And the system has meanwhile raised concerns among privacy-conscious members of society about potential infringements on Kenyans’ right to privacy and about the technical capabilities of these technologies, including AI facial recognition. The secrecy often surrounding this surveillance, the Edgelands Institute report notes, complicates trust between citizens and the state.

It may be some time yet before the lights and the cameras lead to action.

Photographer Lindokuhle Sobekwa’s portable camera obscura uses a box and a magnifying glass to take images for this story.

The smart city where everybody knows your name
https://www.codastory.com/authoritarian-tech/kazakhstan-smart-city-surveillance/ (Thu, 26 Oct 2023)
In small-town Kazakhstan, an experiment with the “smart city” model has some residents smiling. But it also signals the start of a new mass surveillance era for the Central Asian nation.

At first glance, Aqkol looks like most other villages in Kazakhstan today: shoddy construction, rusting metal gates and drab apartment blocks recall its Soviet past and lay bare the country’s uncertain economic future. But on the village’s outskirts, on a hill surrounded by pine trees, sits a large gray and white cube: a central nervous system connecting thousands of miles of fiber optic cables, sensors and data terminals that keeps tabs on the daily comings and goings of the village’s 13,000 inhabitants. 

This is the command center of Smart Aqkol, a pilot study in digitized urban infrastructure for Kazakhstan. When I visited, Andrey Kirpichnikov, the deputy director of Smart Aqkol, welcomed me inside. Dressed in a black Fila tracksuit and sneakers, the middle-aged Aqkol native scanned his face at a console that bore the logo of Hikvision, the Chinese surveillance camera manufacturer. A turnstile gave a green glow of approval and opened, allowing us to walk through.

“All of our staff can access the building using their unique face IDs,” Kirpichnikov told me.

He led me into a room with a large monitor displaying a schematic of the village. The data inputs and connected elements that make up Smart Aqkol draw on everything from solar panels and gas meters to GPS trackers on public service vehicles and surveillance cameras, he explained. Analysts at the command center report their findings to the mayor’s office, highlighting data on energy use, school attendance rates and evidence for police investigations. 
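
A command center like this is, at bottom, a pipeline that normalizes many heterogeneous feeds into one queryable stream. As a rough illustration only — the field names and feed types below are my assumptions, not Smart Aqkol’s actual schema:

```python
# Minimal sketch of a unified ingestion schema for heterogeneous city
# feeds (gas meters, GPS trackers, cameras). Field names and feed
# types are assumptions for illustration, not Smart Aqkol's schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SensorEvent:
    source_type: str    # e.g. "gas_meter", "bus_gps", "cctv"
    source_id: str      # unique sensor identifier
    timestamp: datetime
    payload: dict       # feed-specific reading

events = [
    SensorEvent("gas_meter", "gm-0412", datetime.now(timezone.utc),
                {"cubic_meters": 1.7}),
    SensorEvent("bus_gps", "bus-07", datetime.now(timezone.utc),
                {"lat": 52.0, "lon": 70.9, "speed_kmh": 38}),
]

# Analysts' dashboards can then aggregate by source type, location, etc.
by_type = {}
for e in events:
    by_type.setdefault(e.source_type, []).append(e)
print({k: len(v) for k, v in by_type.items()})
```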

“I see a huge future in what we’re doing here,” Kirpichnikov told me, gesturing at a heat map of the village on the big screen. “Our analytics keep improving and they are only going to get better as we expand the number of sensory inputs.”

“We’re trying to make life better, more efficient and safer,” he explained. “Who would be opposed to such a project?”

Much of Aqkol’s housing and infrastructure dates from the Soviet era.

Smart Aqkol presents an experimental vision of Kazakhstan’s economic prospects and its technocratic leadership’s governing ambitions. In January 2019, when then-President Nursultan Nazarbayev spoke at the project’s launch, he waxed enthusiastic about a future in which public officials could use networked municipal systems to run Kazakhstan “like a company.” The smart city model is appealing for leaders of the oil-rich nation, which has struggled to modernize its economy and shed its reputation for rampant government corruption. But analysts I spoke with say it also marks a turn toward Chinese-style public surveillance systems. Amid the war in Ukraine, Kazakhstan’s engagement with China has deepened as a way to hedge against dependence on Russia, its former colonial patron.

Kazakhstan’s smart city initiatives aren’t starting from a digital zero. The country has made strides in digitizing public services, and now ranks second among countries of the former Soviet Union in the United Nations’ e-governance development index. (Estonia is number one.) The capital Astana also has established itself as a regional hub for fintech innovation. 

And it’s not only government officials who want these systems. “There is a lot of domestic demand, not just from the state but also from Kazakhstan’s middle class,” said Erica Marat, a professor at the U.S. National Defense University. There’s an allure about smart city systems, which in China and other Asian cities are thought to have improved living standards and reduced crime.

They also hold some promise of increasing transparency around the work of public officials. “The government hopes that digital platforms can overcome cases of petty corruption,” said Oyuna Baldakova, a technology researcher at King’s College London. This would be a welcome shift for Kazakhstan, which currently ranks 101st out of 180 countries on Transparency International’s Corruption Perceptions Index.

Beyond the town’s main street, many roads remain unpaved in Aqkol.

But the pilot in Aqkol doesn’t quite align with these grander ambitions, at least not yet. Back at the command center, Kirpichnikov described how Aqkol saw a drop in violent crime and alcohol-related offenses after the system’s debut. But in a town of this size, where crime rates rarely exceed single digits, these kinds of shifts don’t say a whole lot. 

As if to better prove the point, the team showed me videos of crime dramatizations that they recorded using the Smart Aqkol surveillance camera system. In the first video, one man lifted another off the ground in what was meant to mimic a violent assault, but looked much more like the iconic scene where Patrick Swayze lifts Jennifer Grey overhead at the end of “Dirty Dancing.” Another featured a man brandishing a Kalashnikov in one hand, while using the other to hold his cellphone to his ear. In each case, brightly colored circles and arrows appeared on the screen, highlighting “evidence” of wrongdoing that the cameras captured, like the lift and the Kalashnikov.

Kirpichnikov then led me into Smart Aqkol’s “situation room,” where 14 analysts sat facing a giant LED screen while they tracked various signals around town. Contrary to the high-stakes energy that one might expect in a smart city situation room, the atmosphere here felt more like that of a local pub, with the analysts trading gossip about neighbors as they watched them walk past on the feeds from street-level cameras.

Kirpichnikov explained that residents can connect their gas meters to their bank accounts and set up automatic gas payments. This aspect of Smart Aqkol has been a boon for the village. Residents I spoke with praised the new payment system — for decades, the only option was to stand in line to pay for their bills, an exercise that could easily take half a day’s time.

And there was more. To highlight the benefits of Smart Aqkol’s analytics work, Kirpichnikov told me about a recent finding: “We were able to determine that school attendance is lower among children from poorly insulated households.” He pointed to a gradation of purple squares showing variance in heating levels across the village. “We could improve school grades, health and the living standards of residents just by updating our old heating systems,” he said.
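
An analysis like the one Kirpichnikov describes could, in principle, be as simple as correlating household heating readings with attendance records. A minimal sketch, using entirely made-up numbers and column names rather than Smart Aqkol’s actual pipeline:

```python
# Minimal sketch of the attendance-vs-insulation analysis described
# above. Data, column names and the correlation method are assumptions
# for illustration, not Smart Aqkol's actual pipeline.
import pandas as pd

households = pd.DataFrame({
    "avg_indoor_temp_c": [15.0, 16.5, 18.0, 19.5, 21.0, 22.0],
    "school_attendance_rate": [0.78, 0.82, 0.88, 0.90, 0.94, 0.95],
})

# A simple Pearson correlation between winter indoor temperature
# (a proxy for insulation quality) and attendance.
r = households["avg_indoor_temp_c"].corr(households["school_attendance_rate"])
print(f"correlation: {r:.2f}")  # a high positive r would support the claim
```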

Kirpichnikov might be right, but step away from the clean digital interface and any Aqkol resident could tell you that poor insulation is a serious problem in the apartment blocks where most people live, especially in winter when temperatures dip below freezing most nights. Broken windows covered with only a thin sheet of cellophane are a common sight. 

Walking around Aqkol, I was struck by the absence of paved roads and infrastructure beyond the village’s main street. Some street lamps work, but others don’t. And the public Wi-Fi that the village prides itself on offering only appeared to function near government buildings.

Informational signs for free Wi-Fi hang across the village despite the network’s limited reach.

The village also has two so-called warm bus shelters — enclosed spaces with heat lamps to shelter waiting passengers during the harsh Kazakh winters. The stops are supposed to have Wi-Fi, charging ports for phones and single-channel TVs. When I passed by one of the shelters, I met an elderly Aqkol resident named Vera. “All of these things are gone,” she told me, waving her hand at evidence of vandalism. “Now all that’s left is the camera at the back.”

“I don’t know why we need all this nonsense here when we barely have roads and running water,” she added with a sigh. “Technology doesn’t make better people.”

Vera isn’t alone in her critique. Smart Aqkol has brought the village an elaborate overlay of digitization, but it’s plain to see that Aqkol still lags far behind modern Kazakh cities like Astana and Almaty when it comes to basic infrastructure. A local resident named Lyubov Gnativa runs a YouTube channel where she talks about Aqkol’s lack of public services and officials’ failures to address these needs. The local government has filed police reports against Gnativa over the years, accusing her of misleading the public.

And a recent documentary made by Radio Free Europe/Radio Liberty — titled “I Love My Town, But There’s Nothing Smart About It” — corroborates many of Gnativa’s observations and includes interviews with dozens of locals drawing attention to water issues and the lack of insulation in many of the village’s homes.

But some residents say they are grateful for how the system has contributed to public safety. Surveillance cameras now monitor the village’s main thoroughfare from lampposts, as well as inside public schools, hospitals and municipal buildings.

“These cameras change the way people behave and I think that’s a good thing,” said Kirpichnikov. He told a story about a local woman who was recently harassed on a public bench, noting that this kind of interaction would often escalate in the past. “The woman pointed at the camera and the man looked up, got scared and began to walk away.”

A middle-aged schoolteacher named Irina told me she feels much safer since the project was implemented in 2019. “I have to walk through a public park at night and it can be intimidating because a lot of young men gather there,” she said. “After the cameras were installed they never troubled me again.”

A resident of Aqkol.

The Smart Aqkol project was the result of a deal between Kazakhtelecom, Kazakhstan’s national telecommunications company; the Eurasian Resources Group, a state-backed mining company; and Tengri Lab, a tech startup based in Astana. But the hardware came through an agreement under China’s Digital Silk Road initiative, which seeks to wire the world in a way that tends to reflect China’s priorities when it comes to public infrastructure and social control. Smart Aqkol uses surveillance cameras made by Chinese firms Dahua and Hikvision, which in China have been used — and touted, even — for their ability to track “suspicious” people and groups. Both companies are sanctioned by the U.S. due to their involvement in surveilling and aiding in the repression of ethnic Uyghurs in Xinjiang, an autonomous region in western China.

Critics are wary of these kinds of systems in Kazakhstan, where skepticism of China’s intentions in Central Asia has been growing. The country is home to a large Uyghur diaspora of more than 300,000 people, many of whom have deep ties to Xinjiang, where both ethnic Uyghurs and ethnic Kazakhs have been systematically targeted and placed in “re-education” camps. Protests across Kazakhstan in response to China’s mass internment campaign have forced the government to negotiate the release of thousands of ethnic Kazakhs from China, but state authorities have walked this line carefully, in an effort to continue expanding economic ties with Beijing.

Although Kazakhstan requires people to get state permission if they want to hold a protest — and permission is regularly denied — demonstrations nevertheless have become increasingly common in Kazakhstan since 2018. With Chinese-made surveillance tech in hand, it’s become easier than ever for Kazakh authorities to pinpoint unauthorized concentrations of people. Hikvision announced in December 2022 that its software is used by Chinese police to set up “alarms” that are triggered when cameras detect “unlawful gatherings” in public spaces. The company also has claimed that its cameras can detect ethnic minorities based on their unique facial features.
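
Gathering-detection alarms of this general kind typically reduce to counting person detections per frame and firing when the count stays above a configured threshold. The sketch below illustrates only that generic logic; it is not Hikvision’s software, and the detector function is a stand-in:

```python
# Generic sketch of an "unlawful gathering" alarm: count person
# detections per frame, alert when a crowd persists. An illustration
# of the general technique, not Hikvision's software.
from collections import deque

GATHERING_THRESHOLD = 20  # people; set by the operator (assumed value)
SUSTAINED_FRAMES = 30     # the crowd must persist, filtering out blips

def detect_people(frame) -> int:
    """Stand-in for an object-detection model; here `frame` is simply
    a list of detected person bounding boxes."""
    return len(frame)

def monitor(frames):
    recent = deque(maxlen=SUSTAINED_FRAMES)
    for i, frame in enumerate(frames):
        recent.append(detect_people(frame) >= GATHERING_THRESHOLD)
        if len(recent) == SUSTAINED_FRAMES and all(recent):
            yield i  # frame index at which an alarm would fire

# Synthetic test: 40 quiet frames, then a sustained crowd of 25 people.
frames = [[None] * 5] * 40 + [[None] * 25] * 60
print(list(monitor(frames))[:3])  # alarms begin once 30 crowded frames accrue
```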

Much of Aqkol’s digitized infrastructure shows its age.

Marat, of the U.S. National Defense University, noted the broader challenges posed by surveillance tech. “We saw during the Covid-19 pandemic how quickly such tech can be adapted to other purposes such as enforcing lockdowns and tracing people’s whereabouts.”

“Such technology could easily be used against protest leaders too,” she added.

In January 2022, instability triggered by rising energy prices resulted in the government issuing “shoot to kill” orders against protesters — more than 200 people were killed in the ensuing clashes. The human rights news and advocacy outlet Bitter Winter wrote at the time that China had sent a video analytics team to Kazakhstan to use cameras it had supplied to identify and arrest protesters. Anonymous sources in its report alleged that the facial profiles of slain protesters were later compared with the facial data of individuals who appeared in surveillance video footage of riots, in an effort to justify government killings of “terrorists.”

With security forming a central promise of the smart city model, broad public surveillance is all but guaranteed. The head of Tengri Lab, the company leading the development of Smart Aqkol, has said in past interviews that school security was a key motivation behind the company’s decision to spearhead the use of artificial intelligence-powered cameras.

“After the high-profile incident in Kerch, we added the ability to automatically detect weapons,” he said, referencing a mass shooting at a college in Russian-occupied Crimea that left more than 20 people dead in October 2018. In the same interview, he made an additional claim: “All video cameras in the city automatically detect massive clusters of people,” a veiled reference to the potential for this technology to be used against protesters.

Soon, there will be more smart city systems across Kazakhstan. The companies behind Smart Aqkol have signed memorandums of understanding with Almaty, home to almost 2 million people, and with Karaganda, home to half a million, to develop similar systems. “The mayor of Karaganda was impressed by our technology and capabilities, but he was mainly interested in the surveillance cameras,” Kirpichnikov told me.

As to the question of whether these systems share data with Chinese officials, “we simply don’t have a clear answer on who has the data and how it is used,” Marat told me. “We can’t say definitively whether China has access but we know its companies are extremely dependent on the Chinese state.”

When I reached out to Tengri Lab to ask whether there are concerns regarding the safety of private data connected to the project, the company declined to comment.

Residents of Aqkol.

What does all this mean for Aqkol? The village is so small that the faces captured on camera are rarely those of strangers. The analysts told me they recognize most of the town’s 13,000 inhabitants between them. I asked whether this makes people uncomfortable, knowing their neighbors are watching them at all times.

Danir, a born-and-raised Aqkol analyst in the situation room, told me he doesn’t believe the platform will be abused. “All my friends and family know I am watching from this room and keeping them safe,” he said. “I don’t think anybody feels threatened — we are their friends, their neighbors.”

“People fear what they don’t understand and people complain about the cameras until they need them,” said Kirpichnikov. “There was a woman once who spoke publicly against the project but after we returned her lost handbag — after we spotted it on a camera — she started to see the benefits of what we are building here.”

After a few years with the system up and running, “it’s normal,” said Danir with a shrug. “Nobody has complained to me.”

For regular people, it doesn’t mean a whole lot. And that may be OK, at least for now. As Irina, the schoolteacher whom I met on the village’s main thoroughfare, put it: “I don’t really know what a smart city is, but I like living here. They say we’re safer and my bills are lower than they used to be, and I’m happy.”

When AI doesn’t speak your language https://www.codastory.com/authoritarian-tech/artificial-intelligence-minority-language-censorship/ Fri, 20 Oct 2023 14:07:03 +0000 Better tech could do a lot of good for minority language speakers — but it could also make them easier to surveil

If you want to send a text message in Mongolian, it can be tough – it’s a script that most software doesn’t recognize. But for some people in Inner Mongolia, an autonomous region in northern China, that’s a good thing.

When authorities in Inner Mongolia announced in 2020 that Mongolian would no longer be the language of instruction in schools, ethnic Mongolians — who make up about 18% of the region’s population — feared the loss of their language, one of the last remaining markers of their distinctive identity. News of the decision, and then plans for protest, flowed across WeChat, China’s largest messaging service. Parents were soon marching by the thousands in the streets of the regional capital, demanding that the decision be reversed.

Why did we write this story?

The AI industry so far is dominated by technology built by and for English speakers. This story asks what the technology looks like for speakers of less common languages, and how that might change in the near term.

With the remarkable exception of the so-called Zero Covid protests of 2022, demonstrations of any size are incredibly rare in China, partially because online surveillance prevents large numbers of people from openly discussing sensitive issues in Mandarin, much less planning public marches. But because automated surveillance technologies have a hard time with Mongolian, protesters had the advantage of being able to coordinate with relative freedom.

Most of the world’s writing systems have been digitized under a single international encoding standard, known as Unicode, but the Mongolian script was encoded so sloppily that it is barely usable. Instead, people use a jumble of competing, often incompatible programs when they need to type in Mongolian. WeChat has a Mongolian keyboard, but it’s unwieldy and users often prefer to send each other screenshots of text instead. The constant exchange of images is inconvenient, but it has the unintended benefit of being much more complicated for authorities to monitor and censor.
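
The gap is visible even at the level of raw code points. Here is a minimal Python sketch: the code points below are real Unicode, but how the word renders, and whether two spellings of it match, depends entirely on the reader’s fonts and software.

```python
# Minimal illustration of why digitized Mongolian is fragile. The script has
# a Unicode block (U+1800-U+18AF), but letters are encoded phonetically: one
# code point can require several visual forms depending on its position in a
# word, and fonts and shaping engines disagree about the rules.
import unicodedata

word = "\u182E\u1823\u1829\u182D\u1823\u182F"  # the word "Mongol"
for ch in word:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# Writers sometimes need invisible Free Variation Selectors (U+180B-U+180D)
# to force the correct letterform. Two encodings of the same visible text
# then compare as different strings, which breaks search and filtering --
# including, as it happens, automated censorship.
print("\u1820" == "\u1820\u180B")  # False: same letter, different code points
```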

All but 60 of the world’s roughly 7,000 languages are considered “low-resource” by artificial intelligence researchers. Mongolian belongs to the vast majority of languages barely represented online, whose speakers deal with many challenges resulting from the predominance of English on the global internet. As technology improves, automated processes across the internet — from search engines to social media sites — may start to work a lot better for under-resourced languages. This could do a lot of good, giving those language speakers access to all kinds of tools and markets, but it will likely also reduce the degree to which languages like Mongolian fly under the radar of censors. The tradeoff for languages that have historically hovered on the margins of the internet is between safety and convenience on one hand, and freedom from censorship and intrusive eavesdropping on the other.

Back in Inner Mongolia, when parents were posting on WeChat about their plans to protest, it became clear that the app’s algorithms couldn’t make sense of the jpegs of Mongolian cursive, said Soyonbo Borjgin, a local journalist who covered the protests. The images and the long voice messages that protesters would exchange were protected by the Chinese state’s ignorance — there were no AI resources available to monitor them, and overworked police translators had little chance of surveilling all possibly subversive communication. 

China’s efforts to stifle the Mongolian language within its borders have only intensified since the protests. Keen on the technological dimensions of the battle, Borjgin began looking into a machine learning system being developed at Inner Mongolia University. The system would allow computers to read images of the Mongolian script after being trained on digitized reams of printed material published when Mongolian still had Chinese state support. While reporting the story, Borjgin was told by the lead researcher that the project had received state money. Borjgin took this as a clear signal: The researchers were getting funding because what they were doing amounted to a state security project. The technology would likely be used to prevent future dissident organizing.

First-graders on the first day of school in Hohhot, Inner Mongolia Autonomous Region of China in August 2023. Liu Wenhua/China News Service/VCG via Getty Images.

Until recently, AI has only worked well for the vanishingly small number of languages with large bodies of texts to train the technology on. Even national languages with hundreds of millions of speakers, like Bangla, have largely remained outside the priorities of tech companies. Last year, though, both Google and Meta announced projects to develop AI for under-resourced languages. But while newer AI models are able to generate some output in a wide set of languages, there’s not much evidence to suggest that it’s high quality. 

Gabriel Nicholas, a research fellow at the Center for Democracy and Technology, explained that once tech companies have established the capacity to process a new language, they have a tendency to congratulate themselves and then move on. A market dominated by “big” languages gives them little incentive to keep investing in improvements. Hellina Nigatu, a computer science PhD student at the University of California, Berkeley, added that low-resource languages face the risk of “constantly trying to catch up” — or even losing speakers — to English.

Researchers also warn that even as the accuracy of machine translation improves, language models miss out on important, culturally specific details that can have real-world consequences. Companies like Meta, which partially rely on AI to review social media posts for things like hate speech and violence, have run into problems when they try to use the technology for under-resourced languages. Because they’ve been trained on just the few texts available, their AI systems too often have an incomplete picture of what words mean and how they’re used.

Arzu Geybulla, an Azerbaijani journalist who specializes in digital censorship, said that one problem with using AI to moderate social media content in under-resourced languages is the “lack of understanding of cultural, historical, political nuances in the way the language is being used on these platforms.” In Azerbaijan, where violence against Armenians is regularly celebrated online, the word “Armenian” itself is often used as a slur to attack dissidents. Because the term is innocuous in most other contexts, it’s easy for AI and even non-specialist human moderators to overlook its use. She also noted that AI used by social media platforms often lumps the Azerbaijani language together with languages spoken in neighboring countries: Azerbaijanis frequently send her screenshots of automated replies in Russian or Turkish to the hate speech reports they’d submitted in Azerbaijani.
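
Part of the mechanics here is mundane: many moderation pipelines begin with automated language identification, and short or code-mixed text in a low-resource language is easy to mis-bucket. Below is a rough Python sketch of that first step, assuming fastText’s publicly available lid.176 language-identification model; the report text is a placeholder and the outputs are illustrative, not measured.

```python
# Sketch of automated language ID, a typical first step in moderation
# pipelines. Assumes fastText's public lid.176.bin model has been
# downloaded locally; results here are illustrative only.
import fasttext

model = fasttext.load_model("lid.176.bin")

# Placeholder: a short hate-speech report written in Azerbaijani.
report = "..."

# Ask for the top three guesses. For short or code-mixed text, neighboring
# languages such as Turkish (__label__tr) or Russian (__label__ru) can score
# close to Azerbaijani (__label__az) -- one plausible mechanism behind
# automated replies arriving in the wrong language.
labels, probs = model.predict(report, k=3)
for label, prob in zip(labels, probs):
    print(label.replace("__label__", ""), round(float(prob), 3))
```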

But Geybulla believes improving AI for monitoring hate speech and incitement in Azerbaijani will lock in an essentially defective system. “I’m totally against training the algorithm,” she told me. “Content moderation needs to be done by humans in all contexts.” In the hands of an authoritarian government, sophisticated AI for previously neglected languages can become a tool for censorship. 

According to Geybulla, Azerbaijani currently has such “an old school system of surveillance and authoritarianism that I wouldn’t be surprised if they still rely on Soviet methods.” Given the government’s demonstrated willingness to jail people for what they say online and to engage in mass online astroturfing, she believes that improving automated flagging for the Azerbaijani language would only make the repression worse. Instead of strengthening these easily abusable technologies, she argues that companies should invest in human moderators. “If I can identify inauthentic accounts on Facebook, surely someone at Facebook can do that too, and faster than I do,” she said. 

Different languages require different approaches when building AI. Indigenous languages in the Americas, for instance, show forms of complexity that are hard to account for without either large amounts of data — which they currently do not have — or diligent expert supervision. 

One such expert is Michael Running Wolf, founder of the First Languages AI Reality initiative, who says developers underestimate the challenge of American languages. While working as a researcher on Amazon’s Alexa, he began to wonder what was keeping him from building speech recognition for Cheyenne, his mother’s language. Part of the problem, he realized, was computer scientists’ unwillingness to recognize that American languages might present challenges that their algorithms couldn’t understand. “All languages are seen through the lens of English,” he told me.

Running Wolf thinks Anglocentrism is mostly to blame for the neglect that Indigenous languages have faced in the tech world. “The AI field, like any other space, is occupied by people who are set in their ways and unintentionally have a very colonial perspective,” he told me. “It’s not as if we haven’t had the ability to create AI for Indigenous languages until today. It’s just no one cares.” 

American languages were put in this position deliberately. Until well into the 20th century, the U.S. government’s policy position on Indigenous American languages was eradication. From 1860 to 1978, tens of thousands of children were forcibly separated from their parents and kept in boarding schools where speaking their mother tongues brought beatings or worse. Nearly all Indigenous American languages today are at immediate risk of extinction. Running Wolf hopes AI tools like machine translation will make Indigenous languages easier to learn to fluency, making up for the current lack of materials and teachers and reviving the languages as primary means of communication.

His project also relies on training young Indigenous people in machine learning — he’s already held a coding boot camp on the Lakota reservation. If his efforts succeed, he said, “we’ll have Indigenous peoples who are the experts in natural language processing.” Running Wolf said he hopes this will help tribal nations to build up much-needed wealth within the booming tech industry.

The idea of his research allowing automated surveillance of Indigenous languages doesn’t scare Running Wolf so much, he told me. He compared their future online to their current status in the high school basketball games that take place across North and South Dakota. Indigenous teams use Lakota to call plays without their opponents understanding. “And guess what? The non-Indigenous teams are learning Lakota so that they know what the Lakota are doing,” Running Wolf explained. “I think that’s actually a good thing.”

The problem of surveillance, he said, is “a problem of success.” He hopes for a future in which Indigenous computer scientists are “dealing with surveillance risk because the technology’s so prevalent and so many people speak Chickasaw, so many people speak Lakota or Cree, or Ute — there’s so many speakers that the NSA now needs to have the AI so that they can monitor us,” referring to the U.S. National Security Agency, infamous for its snooping on communications at home and abroad.

Not everyone wishes for that future. The Cheyenne Nation, for instance, wants little to do with outsiders, he told me, and isn’t currently interested in using the systems he’s building. “I don’t begrudge that perspective because that’s a perfectly healthy response to decades, generations of exploitation,” he said.

Like Running Wolf, Borjgin believes that in some cases, opening a language up to online surveillance is a sacrifice necessary to keep it alive in the digital era. “I somewhat don’t exist on the internet,” he said. Because their language has such a small online culture, he said, “there’s an identity crisis for Mongols who grew up in the city,” pushing them instead towards Mandarin. 

Despite the intense political repression that some of China’s other ethnic minorities face, Borjgin said, “one thing I envy about Tibetan and Uyghur is once I ask them something they will just google it with their own input system and they can find the result in one second.” Even though he knows that it will be used to stifle dissent, Borjgin still supports improving the digitization of the Mongol script: “If you don’t have the advanced technology, if it only stays to the print books, then the language will be eradicated. I think the tradeoff is okay for me.”

Indian journalists are being treated like terrorists for doing their jobs https://www.codastory.com/authoritarian-tech/newsclick-raids-press-freedom-decline-india/ Thu, 12 Oct 2023 11:23:01 +0000 Accused of receiving Chinese funding, the founder of a digital newsroom critical of the Modi government faces terrorism charges

When India hosted the G20 summit last month, it presented itself as the “mother of democracy” to the parade of leaders and delegations from the world’s largest economies. But at home, when the world is not watching as closely, Prime Minister Narendra Modi is systematically clamping down on free speech.

In a dramatic operation that began as the sun rose on Delhi on October 3, police raided the homes of journalists across the city. Police seized laptops and mobile phones, and interrogated reporters about stories they had written and any money they might have received from foreign bank accounts. The journalists targeted by the police work for NewsClick, a small but influential website founded in 2009 by Prabir Purkayastha, an engineer by training who is also a prominent advocate for left-wing causes and ideas. 

At the time of publication, Purkayastha and a senior NewsClick executive had been held in judicial custody for 10 days. The allegations they face are classified under India’s Unlawful Activities (Prevention) Act, anti-terrorism legislation that was significantly expanded in 2019 and gives the government sweeping powers to combat terrorist activity.

Purkayastha, a journalist of considerable standing, is effectively being likened to a terrorist.

Reporters surround NewsClick’s founder and editor Prabir Purkayastha as he is led away by the Delhi police. NewsClick is accused of accepting funds to spread Chinese propaganda. Raj K Raj/Hindustan Times via Getty Images.

The day after the raids, which targeted more than 40 NewsClick employees and contributors, a meeting was called at the Press Club of India. Among the many writers and journalists in attendance was the internationally celebrated, Booker Prize-winning author Arundhati Roy. A longtime critic of Indian government policies, regardless of the political party in power, Roy told me that India was in “an especially dangerous moment.”

She argued that the Modi government was deliberately conflating terrorism and journalism, cracking down on what officials described as “intellectual terrorism and narrative terrorism.” It has to do, she told me, “with changing the very nature of the Indian constitution and the very understanding of checks and balances.” She said the targeting of NewsClick, which has about four million YouTube subscribers, was intended as a warning against digital publications.

The Indian government had targeted NewsClick before, investigating what it said were illegal sources of foreign funding from China. For these latest raids, the catalyst appears to have been, at least in part, an investigation published in The New York Times in August that connected NewsClick to Neville Roy Singham, an Indian-American tech billionaire who, the story alleges, has funded the spread of Chinese propaganda through a “tangle of nonprofit groups and shell companies.”

In the lengthy article, The New York Times reporters made only brief mention of NewsClick, claiming that the site “sprinkled its coverage with Chinese government talking points.” They also quoted a phrase from a video that NewsClick published in 2019 about the 70th anniversary of the 1949 revolution which ended with the establishment of the People’s Republic of China: “China’s history continues to inspire the working classes.” But it appeared to be enough for the Delhi police to seize equipment from and intimidate even junior staff members, cartoonists and freelance contributors to the site. 

Angered by the unintended consequences of The New York Times report, a knot of protesters gathered outside its New York offices near Times Square a couple of days after the raids. Kavita Krishnan, an author and self-described Marxist feminist, wrote on the Indian news and commentary website Scroll that she had warned The New York Times reporters who had contacted her for comment on the Singham investigation that their glancing reference to NewsClick would give the Modi government ammunition to harass Indian journalists.

The “NYT needs to hold its own practices up to scrutiny and ask itself if, in this case, they have allowed themselves to become a tool for authoritarian propaganda and criminalization of journalism in India,” she wrote.

While The New York Times stood by its story, a Times spokesperson told Scroll that they “would find it deeply troubling and unacceptable if any government were to use our reporting as an excuse to silence journalists.”

On October 10, a Delhi court ordered that Purkayastha and NewsClick’s human resources head Amit Chakraborty be held in judicial custody for 10 days, even as their lawyers insisted that there was no evidence that NewsClick had “received any funding or instructions from China or Chinese entities.”

India’s difficult relationship with China is at a particularly low ebb, with tens of thousands of troops amassed along their disputed borders and diplomats and journalists on both sides frequently expelled. From a Western point of view, India is also being positioned as a strategically vital counterweight to Chinese dominance of the Indo-Pacific region. Though diplomatic tensions are high, India’s trade with China has — until a 0.9% drop in the first half of this year — flourished, reaching a record $136 billion last year. 

While the Indian government continues to court Chinese investment, it is suspicious of the Chinese smartphone industry — which controls about 70% of India’s smartphone market — and of any foreign stake in Indian media groups. The mainstream Indian media is increasingly controlled by corporate titans close to Modi. For instance, Mukesh Ambani and Gautam Adani, who control vast conglomerates that touch on everything from cooking oil and fashion to petroleum oil and infrastructure and who have at various points in the last year been two of the 10 richest men in the world, also own major news networks. 

By March this year, Adani had completed his hostile takeover of NDTV, widely considered to have been India’s last major mainstream news network to consistently hold the Modi government to account. Independent journalists and organizations such as NewsClick that report critically on the government are now out of necessity building their own audiences on platforms such as YouTube. Cutting off these organizations’ access to funds, particularly from foreign sources, helps tighten the Modi government’s grip on India’s extensive if poorly funded media.

Siddharth Varadarajan, a founder of the Indian news website The Wire, said that the actions taken against NewsClick are “an attack on an independent media organization at a time when many media organizations are singing the tune of the government.” It was not a surprise, he told me, that Delhi police were asking NewsClick journalists about their reporting on the farmers’ protests in India between August 2020 and December 2021. “While the government says it is investigating a crime on the level of terrorism, the main goal is to delegitimize and criminalize certain topics and lines of inquiry.”

The Unlawful Activities (Prevention) Act, under which Purkayastha and Chakraborty are charged, was amended in 2019 to give the government the power to designate individuals as terrorists before they are convicted by a court of law. It is a piece of legislation that, as United Nations special rapporteurs noted in a letter to the Indian government, undermines India’s signed commitments to uphold international human rights.

Legislative changes introduced by the Modi government include a new data protection law and a proposed Digital India Act, both of which give it untrammeled access to communications and private data. These laws also formalize its authority to demand information from multinational tech companies — India already leads the world in seeking to block verified journalists from posting content on X, the platform formerly known as Twitter — and even shut down the internet, something that it has done for days and even months on end in states across the country during periods of unrest. 

India’s willingness to clamp down on freedom of information is reflected in its steep slide down the annual World Press Freedom Index. Currently ranked 161 out of 180 countries, India has slipped by 20 places since 2014 when Modi became prime minister. “The violence against journalists, the politically partisan media and the concentration of media ownership all demonstrate that press freedom is in crisis in ‘the world’s largest democracy,’” observes Reporters Without Borders, which compiles the ranking. 

Atul Chaurasia, the managing editor at the Indian digital news platform Newslaundry, told me that “all independent and critical journalists feel genuine fear that tomorrow the government may go after them.” In the wake of the NewsClick raids, Chaurasia described the Indian government as the “father of hypocrisy,” an acerbic reference to the Modi government’s boasts about India’s democratic credentials when world leaders, including U.S. President Joe Biden, arrived in Delhi in September for the G20 summit.

When Biden and Modi held a bilateral meeting in Delhi before the summit began, Reuters reported that “the U.S. press corps was sequestered in a van, out of eyesight of the two leaders — an unusual situation for the reporters and photographers who follow the U.S. President at home and around the world to witness and record his public appearances.” Modi himself, despite being the elected leader of a democracy for nearly 10 years, has never answered questions in a press conference in India. 

Instead, Modi addresses the nation once a month on a radio broadcast titled “Mann ki baat,” meaning “words from the heart.” And he very occasionally gives seemingly scripted interviews to friendly journalists and fawning movie stars. 

As for unfriendly journalists, Purkayastha is currently in judicial custody while a variety of Indian investigative agencies are on what Arundhati Roy called a “fishing expedition,” rooting through journalists’ phones and NewsClick’s finances and tax filings in search of evidence of wrongdoing. Varadarajan of The Wire told me that the message being sent to readers and viewers of NewsClick and other sites intent on holding the Modi government to account was clear: “Don’t trust their content and don’t even think about giving them money because they are raising money for anti-national activities.”

U.S. President Joe Biden and Indian Prime Minister Narendra Modi greet each other at the G20 leaders’ summit in Delhi last month. Evan Vucci/POOL/AFP via Getty Images.

Since my conversation with Roy at the Press Club of India on October 4, it has been reported that she faces the possibility of arrest. 

Delhi’s lieutenant governor — an official appointed by the government and considered the constitutional, if unelected, head of the Indian capital — cleared the way for her to be prosecuted for stating in 2010 that in her opinion, Kashmir, the site of long-running territorial conflict between India and Pakistan, has “never been an integral part of India.” A police complaint was filed 13 years ago, but Indian regulations require state authorities to sign off on prosecutions involving crimes such as hate speech and sedition. Now they have.

Apar Gupta, a lawyer, writer and advocate for digital rights, describes the Modi government’s eagerness to use the law and law enforcement agencies against its critics as “creating a climate of threat and fear.” Young people especially, he told me, have to have “extremely high levels of motivation to follow their principles because practicing journalism now comes with the acute threat of prosecution, of censorship, of trolling, and of adverse reputational and social impacts.”

A young NewsClick reporter, requesting anonymity, told me that “with every knock at the door, I feel like they’ve finally come for me.” They described the paranoia that had gripped their parents: “My father now only contacts me on Signal because it’s end-to-end encrypted. I could never have imagined any of this.”

Following the NewsClick raids, Rajiv Malhotra, an Indian-American Hindu supremacist ideologue, appeared on a major Indian news network to openly call for the Modi government to target even more independent journalists. Malhotra singled out the People’s Archive of Rural India (PARI), a website founded by P. Sainath, an award-winning journalist committed to foregrounding the perspectives of rural and marginalized people. 

On what grounds does Malhotra suggest that the Modi government go after Sainath and PARI? The site, Malhotra told the newscaster, who did not interrupt him, encourages young villagers, Dalits (a caste once referred to as “untouchable”), Muslims and other minorities to “tell their story of dissent and grievances against the nation state.”

Criticism of the nation and its authorities, in other words, is akin to sowing division. Whether it’s an opinion given in 2010 or a reference to Chinese funding within an article from a newspaper loathed by supporters of Modi and his Hindu nationalist ideology, the Indian government will apparently use any excuse to silence its critics. 

Silicon Savanna: The workers taking on Africa’s digital sweatshops https://www.codastory.com/authoritarian-tech/kenya-content-moderators/ Wed, 11 Oct 2023 11:11:00 +0000 Content moderators for TikTok, Meta and ChatGPT are demanding that tech companies reckon with the human toll of their enterprise.


This story was updated at 6:30 ET on October 16, 2023

Wabe didn’t expect to see his friends’ faces in the shadows. But it happened after just a few weeks on the job.

He had recently signed on with Sama, a San Francisco-based tech company with a major hub in Kenya’s capital. The middleman company was providing the bulk of Facebook’s content moderation services for Africa. Wabe, whose name we’ve changed to protect his safety, had previously taught science courses to university students in his native Ethiopia.

Why did we write this story?

The world’s biggest tech companies today have more power and money than many governments. This story offers a deep dive on court battles in Kenya that could jeopardize the outsourcing model upon which Meta has built its global empire.

Now, the 27-year-old was reviewing hundreds of Facebook photos and videos each day to decide if they violated the company’s rules on issues ranging from hate speech to child exploitation. He would get between 60 and 70 seconds to make a determination, sifting through hundreds of pieces of content over an eight-hour shift.

One day in January 2022, the system flagged a video for him to review. He opened up a Facebook livestream of a macabre scene from the civil war in his home country. What he saw next was dozens of Ethiopians being “slaughtered like sheep,” he said. 

Then Wabe took a closer look at their faces and gasped. “They were people I grew up with,” he said quietly. People he knew from home. “My friends.”

Wabe leapt from his chair and stared at the screen in disbelief. He felt the room close in around him. Panic rising, he asked his supervisor for a five-minute break. “You don’t get five minutes,” she snapped. He turned off his computer, walked off the floor, and beelined to a quiet area outside of the building, where he spent 20 minutes crying by himself.

Wabe had been building a life for himself in Kenya while back home, a civil war was raging, claiming the lives of an estimated 600,000 people from 2020 to 2022. Now he was seeing it play out live on the screen before him.

That video was only the beginning. Over the next year, the job brought him into contact with videos he still can’t shake: recordings of people being beheaded, burned alive, eaten.

“The word evil is not equal to what we saw,” he said. 

Yet he had to stay in the job. Pay was low — less than two dollars an hour, Wabe told me — but going back to Ethiopia, where he had been tortured and imprisoned, was out of the question. Wabe worked with dozens of other migrants and refugees from other parts of Africa who faced similar circumstances. Money was too tight — and life too uncertain — to speak out or turn down the work. So he and his colleagues kept their heads down and steeled themselves each day for the deluge of terrifying images.

Over time, Wabe began to see moderators as “soldiers in disguise” — a low-paid workforce toiling in the shadows to make Facebook usable for billions of people around the world. But he also noted a grim irony in the role he and his colleagues played for the platform’s users: “Everybody is safe because of us,” he said. “But we are not.”  

Wabe said dozens of his former colleagues in Sama’s Nairobi offices now suffer from post-traumatic stress disorder. Wabe has also struggled with thoughts of suicide. “Every time I go somewhere high, I think: What would happen if I jump?” he wondered aloud. “We have been ruined. We were the ones protecting the whole continent of Africa. That’s why we were treated like slaves.”

The West End Towers house the Nairobi offices of Majorel, a Luxembourg-based content moderation firm with over 22,000 employees on the African continent.

To most people using the internet — most of the world — this kind of work is literally invisible. Yet it is a foundational component of the Big Tech business model. If social media sites were flooded with videos of murder and sexual assault, most people would steer clear of them — and so would the advertisers that bring the companies billions in revenue.

Around the world, an estimated 100,000 people work for companies like Sama, third-party contractors that supply content moderation services for the likes of Facebook’s parent company Meta, Google and TikTok. But while it happens at a desk, mostly on a screen, the demands and conditions of this work are brutal. Current and former moderators I met in Nairobi in July told me this work has left them with post-traumatic stress disorder, depression, insomnia and thoughts of suicide.

These “soldiers in disguise” are reaching a breaking point. Because of people like Wabe, Kenya has become ground zero in a battle over the future of content moderation in Africa and beyond. On one side are some of the most powerful and profitable tech companies on earth. On the other are young African content moderators who are stepping out from behind their screens and demanding that Big Tech companies reckon with the human toll of their enterprise.

In May, more than 150 moderators in Kenya, who keep the worst of the worst off of platforms like Facebook, TikTok and ChatGPT, announced their drive to create a trade union for content moderators across Africa. The union would be the first of its kind on the continent and potentially in the world.

There are also major pending lawsuits before Kenya’s courts targeting Meta and Sama. More than 180 content moderators — including Wabe — are suing Meta for $1.6 billion over poor working conditions, low pay and what they allege was unfair dismissal after Sama ended its content moderation agreement with Meta and Majorel picked up the contract instead. The plaintiffs say they were blacklisted from reapplying for their jobs after Majorel stepped in. In August, a judge ordered both parties to settle the case out of court, but the mediation broke down on October 16 after the plaintiffs’ attorneys accused Meta of scuttling the negotiations and ignoring moderators’ requests for mental health services and compensation. The lawsuit will now proceed to Kenya’s employment and labor relations court, with an upcoming hearing scheduled for October 31.

The cases against Meta are unprecedented. According to Amnesty International, it is the “first time that Meta Platforms Inc will be significantly subjected to a court of law in the global south.” Forthcoming court rulings could jeopardize Meta’s status in Kenya and the content moderation outsourcing model upon which it has built its global empire. 

Meta did not respond to requests for comment about moderators’ working conditions and pay in Kenya. In an emailed statement, a spokesperson for Sama said the company cannot comment on ongoing litigation but is “pleased to be in mediation” and believes “it is in the best interest of all parties to come to an amicable resolution.”

Odanga Madung, a Kenya-based journalist and a fellow at the Mozilla Foundation, believes the flurry of litigation and organizing marks a turning point in the country’s tech labor trajectory. 

“This is the tech industry’s sweatshop moment,” Madung said. “Every big corporate industry here — oil and gas, the fashion industry, the cosmetics industry — have at one point come under very sharp scrutiny for the reputation of extractive, very colonial type practices.”

Nairobi may soon witness a major shift in the labor economics of content moderation. But it also offers a case study of this industry’s powerful rise. The vast capital city — sometimes called “Silicon Savanna” — has become a hub for outsourced content moderation jobs, drawing workers from across the continent to review material in their native languages. An educated, predominantly English-speaking workforce makes it easy for employers from overseas to set up satellite offices in Kenya. And the country’s troubled economy has left workers desperate for jobs, even when wages are low.

Sameer Business Park, a massive office compound in Nairobi’s industrial zone, is home to Nissan, the Bank of Africa, and Sama’s local headquarters. But just a few miles away lies one of Nairobi’s largest informal settlements, a sprawl of homes made out of scraps of wood and corrugated tin. The slum’s origins date back to the colonial era, when the land it sits on was a farm owned by white settlers. In the 1960s, after independence, the surrounding area became an industrial district, attracting migrants and factory workers who set up makeshift housing on the land adjacent to Sameer Business Park.

For companies like Sama, the conditions here were ripe for investment by 2015, when the firm established a business presence in Nairobi. Headquartered in San Francisco, the self-described “ethical AI” company aims to “provide individuals from marginalized communities with training and connections to dignified digital work.” In Nairobi, it has drawn its labor from residents of the city’s informal settlements, including 500 workers from Kibera, one of the largest slums in Africa. In an email, a Sama spokesperson confirmed moderators in Kenya made between $1.46 and $3.74 per hour after taxes.

Grace Mutung’u, a Nairobi-based digital rights researcher at Open Society Foundations, put this into local context for me. On the surface, working for a place like Sama seemed like a huge step up for young people from the slums, many of whom had family roots in factory work. It was less physically demanding and more lucrative. Compared to manual labor, content moderation “looked very dignified,” Mutung’u said. She recalled speaking with newly hired moderators at an informal settlement near the company’s headquarters. Unlike their parents, many of them were high school graduates, thanks to a government initiative in the mid-2000s to get more kids in school.

“These kids were just telling me how being hired by Sama was the dream come true,” Mutung’u told me. “We are getting proper jobs, our education matters.” These younger workers, Mutung’u continued, “thought: ‘We made it in life.’” They thought they had left behind the poverty and grinding jobs that wore down their parents’ bodies. Until, she added, “the mental health issues started eating them up.” 

Today, 97% of Sama’s workforce is based in Africa, according to a company spokesperson. And despite its stated commitment to providing “dignified” jobs, it has caught criticism for keeping wages low. In 2018, the company’s late founder argued against raising wages for impoverished workers from the slum, reasoning that it would “distort local labor markets” and have “a potentially negative impact on the cost of housing, the cost of food in the communities in which our workers thrive.”

Content moderation did not become an industry unto itself by accident. In the early days of social media, when “don’t be evil” was still Google’s main guiding principle and Facebook was still cheekily aspiring to connect the world, this work was performed by employees in-house for the Big Tech platforms. But as companies aspired to grander scales, seeking users in hundreds of markets across the globe, it became clear that their internal systems couldn’t stem the tide of violent, hateful and pornographic content flooding people’s newsfeeds. So they took a page from multinational corporations’ globalization playbook: They decided to outsource the labor.

More than a decade on, content moderation is now an industry that is projected to reach $40 billion by 2032. Sarah T. Roberts, a professor of information studies at the University of California at Los Angeles, wrote the definitive study on the moderation industry in her 2019 book “Behind the Screen.” Roberts estimates that hundreds of companies are farming out these services worldwide, employing upwards of 100,000 moderators. In its own transparency documents, Meta says that more than 15,000 people moderate its content in more than 20 sites around the world. Some (it doesn’t say how many) are full-time employees of the social media giant, while others (it doesn’t say how many) work for the company’s contracting partners.

Kauna Malgwi was once a moderator with Sama in Nairobi. She was tasked with reviewing content on Facebook in her native language, Hausa. She recalled watching coworkers scream, faint and develop panic attacks on the office floor as images flashed across their screens. Originally from Nigeria, Malgwi took a job with Sama in 2019, after coming to Nairobi to study psychology. She told me she also signed a nondisclosure agreement instructing her that she would face legal consequences if she told anyone she was reviewing content on Facebook. Malgwi was confused by the agreement, but moved forward anyway. She was in graduate school and needed the money.

A 28-year-old moderator named Johanna described a similar decline in her mental health after watching TikTok videos of rape, child sexual abuse, and even a woman ending her life in front of her own children. Johanna currently works with the outsourcing firm Majorel, reviewing content on TikTok, and asked that we identify her using a pseudonym, for fear of retaliation by her employer. She told me she’s extroverted by nature, but after a few months at Majorel, she became withdrawn and stopped hanging out with her friends. Now, she dissociates to get through the day at work. “You become a different person,” she told me. “I’m numb.”

This is not the experience that the Luxembourg-based multinational — which employs more than 22,000 people across the African continent — touts in its recruitment materials. On a page about its content moderation services, Majorel’s website features a photo of a woman wearing a pair of headphones and laughing. It highlights the company’s “Feel Good” program, which focuses on “team member wellbeing and resiliency support.”

According to the company, these resources include 24/7 psychological support for employees “together with a comprehensive suite of health and well-being initiatives that receive high praise from our people,” Karsten König, an executive vice president at Majorel, said in an emailed statement. “We know that providing a safe and supportive working environment for our content moderators is the key to delivering excellent services for our clients and their customers. And that’s what we strive to do every day.”

But Majorel’s mental health resources haven’t helped ease Johanna’s depression and anxiety. She says the company provides moderators in her Nairobi office with on-site therapists who see employees in individual and group “wellness” sessions. But Johanna told me she stopped attending the individual sessions after her manager approached her about a topic she had shared in confidence with her therapist. “They told me it was a safe space,” Johanna explained, “but I feel that they breached that part of the confidentiality so I do not do individual therapy.” TikTok did not respond to a request for comment by publication.

Instead, she looked for other ways to make herself feel better. Nature has been especially healing. Whenever she can, Johanna takes herself to Karura Forest, a lush oasis in the heart of Nairobi. One afternoon, she brought me to one of her favorite spots there, a crashing waterfall beneath a canopy of trees. This is where she tries to forget about the images that keep her up at night. 

Johanna remains haunted by a video she reviewed out of Tanzania, where she saw a lesbian couple attacked by a mob, stripped naked and beaten. She thought of them again and again for months. “I wondered: ‘How are they? Are they dead right now?’” At night, she would lie awake in her bed, replaying the scene in her mind.

“I couldn’t sleep, thinking about those women.”

Johanna’s experience lays bare another stark reality of this work. She was powerless to help victims. Yes, she could remove the video in question, but she couldn’t do anything to bring the women who were brutalized to safety. This is a common scenario for content moderators like Johanna, who are not only seeing these horrors in real-time, but are asked to simply remove them from the internet and, by extension, perhaps, from public record. Did the victims get help? Were the perpetrators brought to justice? With the endless flood of videos and images waiting for review, questions like these almost always go unanswered.

The situation that Johanna encountered highlights what David Kaye, a professor of law at the University of California at Irvine and the former United Nations special rapporteur on freedom of expression, believes is one of the platforms’ major blindspots: “They enter into spaces and countries where they have very little connection to the culture, the context and the policing,” without considering the myriad ways their products could be used to hurt people. When platforms introduce new features like livestreaming or new tools to amplify content, Kaye continued, “are they thinking through how to do that in a way that doesn’t cause harm?”

The question is a good one. For years, Meta CEO Mark Zuckerberg famously urged his employees to “move fast and break things,” an approach that doesn’t leave much room for the kind of contextual nuance that Kaye advocates. And history has shown the real-world consequences of social media companies’ failures to think through how their platforms might be used to foment violence in countries in conflict.

The most searing example came from Myanmar in 2017, when Meta famously looked the other way as military leaders used Facebook to incite hatred and violence against Rohingya Muslims as they ran “clearance operations” that left an estimated 24,000 Rohingya people dead and caused more than a million to flee the country. A U.N. fact-finding mission later wrote that Facebook had a “determining role” in the genocide. After commissioning an independent assessment of Facebook’s impact in Myanmar, Meta itself acknowledged that the company didn’t do “enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more.”

Yet five years later, another case now before Kenya’s high court deals with the same issue on a different continent. Last year, Meta was sued by a group of petitioners including the family of Meareg Amare Abrha, an Ethiopian chemistry professor who was assassinated in 2021 after people used Facebook to orchestrate his killing. Amare’s son tried desperately to get the company to take down the posts calling for his father’s head, to no avail. He is now part of the suit that accuses Meta of amplifying hateful and malicious content during the conflict in Tigray, including the posts that called for Amare’s killing.

The case underlines the strange distance between Big Tech behemoths and the content moderation industry that they’ve created offshore, where the stakes of moderation decisions can be life or death. Paul Barrett, the deputy director of the Center for Business and Human Rights at New York University’s Stern School of Business, who authored a seminal 2020 report on the issue, believes this distance helped corporate leadership preserve their image of a shiny, frictionless world of tech. Social media was meant to be about abundant free speech, connecting with friends and posting pictures from happy hour — not street riots or civil war or child abuse.

“This is a very nitty gritty thing, sifting through content and making decisions,” Barrett told me. “They don’t really want to touch it or be in proximity to it. So holding this whole thing at arm’s length as a psychological or corporate culture matter is also part of this picture.”

Sarah T. Roberts likened content moderation to “a dirty little secret. It’s been something that people in positions of power within the companies wish could just go away,” Roberts said. This reluctance to deal with the messy realities of human behavior online is evident today, even in statements from leading figures in the industry. For example, with the July launch of Threads, Meta’s new Twitter-like social platform, Instagram head Adam Mosseri expressed a desire to keep “politics and hard news” off the platform.

The decision to outsource content moderation meant that this part of what happened on social media platforms would “be treated at arm’s length and without that type of oversight and scrutiny that it needs,” Barrett said. But the decision had collateral damage. In pursuit of mass scale, Meta and its counterparts created a system that produces an impossible amount of material to oversee. By some estimates, three million items of content are reported on Facebook alone on a daily basis. And despite what some of Silicon Valley’s other biggest names tell us, artificial intelligence systems are insufficient moderators. So it falls on real people to do the work.

One morning in late July, James Oyange, a former tech worker, took me on a driving tour of Nairobi’s content moderation hubs. Oyange, who goes by Mojez, is lanky and gregarious, quick to offer a high five and a custom-made quip. We pulled up outside a high-rise building in Westlands, a bustling central neighborhood near Nairobi’s business district. Mojez pointed up to the sixth floor: Majorel’s local office, where he worked for nine months, until he was let go.

He spent much of his year in this building. Pay was bad and hours were long, and it wasn’t the customer service job he’d expected when he first signed on — this is something he brought up with managers early on. But the 26-year-old grew to feel a sense of duty about the work. He saw the job as the online version of a first responder — an essential worker in the social media era, cleaning up hazardous waste on the internet. But being the first to the scene of the digital wreckage changed Mojez, too — the way he looks, the way he sleeps, and even his life’s direction.

That morning, as we sipped coffee in a trendy, high-ceilinged cafe in Westlands, I asked how he’s holding it together. “Compared to some of the other moderators I talked to, you seem like you’re doing okay,” I remarked. “Are you?”

His days often started bleary-eyed. When insomnia got the best of him, he would force himself to go running under the pitch-black sky, circling his neighborhood for 30 minutes and then stretching in his room as the darkness lifted. At dawn, he would ride the bus to work, snaking through Nairobi’s famously congested roads until he arrived at Majorel’s offices. A food market down the street offered some moments of relief from the daily grind. Mojez would steal away there for a snack or lunch. His vendor of choice doled out tortillas stuffed with sausage. He was often so exhausted by the end of the day that he nodded off on the bus ride home.

And then, in April 2023, Majorel told him that his contract wouldn’t be renewed.

It was a blow. Mojez walked into the meeting fantasizing about a promotion. He left without a job. He believes he was blacklisted by company management for speaking up about moderators’ low pay and working conditions.

A few weeks later, an old colleague put him in touch with Foxglove, a U.K.-based legal nonprofit supporting the lawsuit against Meta. The organization also helped organize the May meeting in which more than 150 African content moderators across platforms voted to unionize.

At the event, Mojez was stunned by the universality of the challenges facing moderators working elsewhere. He realized: “This is not a Mojez issue. These are 150 people across all social media companies. This is a major issue that is affecting a lot of people.” After that, despite being unemployed, he was all in on the union drive. Mojez, who studied international relations in college, hopes to do policy work on tech and data protection someday. But right now his goal is to see the effort through, all the way to the union’s registry with Kenya’s labor department.

Mojez’s friend in the Big Tech fight, Wabe, also went to the May meeting. Over lunch one afternoon in Nairobi in July, he described what it was like to open up about his experiences publicly for the first time. “I was happy,” he told me. “I realized I was not alone.” This awareness has made him more confident about fighting “to make sure that the content moderators in Africa are treated like humans, not trash,” he explained. He then pulled up a pant leg and pointed to a mark on his calf, a scar from when he was imprisoned and tortured in Ethiopia. The companies, he said, “think that you are weak. They don’t know who you are, what you went through.”

A popular lunch spot for workers outside Majorel’s offices.

Looking at Kenya’s economic woes, you can see why these jobs were so alluring. My visit to Nairobi coincided with a string of July protests that paralyzed the city. The day I flew in, it was unclear if I would be able to make it from the airport to my hotel — roads, businesses and public transit were threatening to shut down in anticipation of the unrest. The demonstrations, which have been bubbling up every so often since last March, came in response to steep new tax hikes, but they were also about the broader state of Kenya’s faltering economy — soaring food and gas prices and a youth unemployment crisis, some of the same forces that drive throngs of young workers to work for outsourcing companies and keep them there.

Leah Kimathi, a co-founder of the Kenyan nonprofit Council for Responsible Social Media, believes Meta’s legal defense in the labor case brought by the moderators betrays Big Tech’s neo-colonial approach to business in Kenya. When the petitioners first filed suit, Meta tried to absolve itself by claiming that it could not be brought to trial in Kenya, since it has no physical offices there and did not directly employ the moderators, who worked for Sama. But a Kenyan labor court saw it differently, ruling in June that Meta — not Sama — was the moderators’ primary employer and that the case against the company could move forward.

“So you can come here, roll out your product in a very exploitative way, disregarding our laws, and we cannot hold you accountable,” Kimathi said of Meta’s legal argument. “Because guess what? I am above your laws. That was the exact colonial logic.”

Kimathi continued: “For us, sitting in the Global South, but also in Africa, we’re looking at this from a historical perspective. Energetic young Africans are being targeted for content moderation and they come out of it maimed for life. This is reminiscent of slavery. It’s just now we’ve moved from the farms to offices.”

As Kimathi sees it, the multinational tech firms and their outsourcing partners made one big, potentially fatal miscalculation when they set up shop in Kenya: They didn’t anticipate a workers’ revolt. If they had considered the country’s history, perhaps they would have seen the writing of the African Content Moderators Union on the wall.

Kenya has a rich history of worker organizing in resistance to the colonial state. The labor movement was “a critical pillar of the anti-colonial struggle,” Kimathi explained to me. She and other critics of Big Tech’s operations in Kenya see a direct line from colonial-era labor exploitation, and the worker organizing it provoked, to the present day.

“They thought that they would come in and establish this very exploitative industry and Kenyans wouldn’t push back,” she said. Instead, they sued.

What happens if the workers actually win?

Foxglove, the nonprofit supporting the moderators’ legal challenge against Meta, writes that the outcome of the case could disrupt the global content moderation outsourcing model. If the court finds that Meta is the “‘true employer’ of their content moderators in the eyes of the law,” Foxglove argues, “then they cannot hide behind middlemen like Sama or Majorel. It will be their responsibility, at last, to value and protect the workers who protect social media — and who have made tech executives their billions.”

But there is still a long road ahead, for the moderators themselves and for the kinds of changes to the global moderation industry that they are hoping to achieve.

In Kenya, the workers involved in the lawsuit and union face practical challenges. Some, like Mojez, are unemployed and running out of money. Others are migrant workers from elsewhere on the continent who may not be able to stay in Kenya for the duration of the lawsuit or union fight.

The Moderators Union is not yet registered with Kenya’s labor office, but if it becomes official, its members intend to push for better conditions for moderators working across platforms in Kenya, including higher salaries and more psychological support for the trauma endured on the job. And their ambitions extend far beyond Kenya: The network hopes to inspire similar actions in other countries’ content moderation hubs. According to Martha Dark, Foxglove’s co-founder and director, the industry’s working conditions have spawned a cross-border, cross-company organizing effort, drawing employees from Africa, Europe and the U.S.

“There are content moderators that are coming together from Poland, America, Kenya, and Germany talking about what the challenges are that they experience when trying to organize in the context of working for Big Tech companies like Facebook and TikTok,” she explained.

Still, there are big questions about whether litigation alone can transform the moderation industry. “It would be good if outsourced content reviewers earned better pay and were better treated,” NYU’s Paul Barrett told me. “But that doesn’t get at the issue that the mother companies here, whether it’s Meta or anybody else, is not hiring these people, is not directly training these people and is not directly supervising these people.” Even if the Kenyan workers are victorious in their lawsuit against Meta, and the company is stung in court, “litigation is still litigation,” Barrett explained. “It’s not the restructuring of an industry.”

So what would truly address the moderation industry’s core problem? For Barrett, the industry will only see meaningful change if companies bring “more, if not all of this function in-house.”

But Sarah T. Roberts, who interviewed workers from Silicon Valley to the Philippines for her book on the global moderation industry, believes collective bargaining is the only pathway forward for changing the conditions of the work. She dedicated the end of her book to the promise of organized labor.

“The only hope is for workers to push back,” she told me. “At some point, people get pushed too far. And the ownership class always underestimates it. Why does Big Tech want everything to be computational in content moderation? Because AI tools don’t go on strike. They don’t talk to reporters.”

Artificial intelligence is part of the content moderation industry, but it will probably never be capable of replacing human moderators altogether. What we do know is that AI models will continue to rely on human beings to train and oversee their data sets — a reality Sama’s CEO recently acknowledged. For now and the foreseeable future, there will still be people behind the screen, fueling the engines of the world’s biggest tech platforms. But because of people like Wabe and Mojez and Kauna, their work is becoming more visible to the rest of us.

While writing this piece, I kept returning to one scene from my trip to Nairobi that powerfully drove home the raw humanity at the base of this entire industry, the people powering the whole system, however much the tech scions might like to pretend otherwise. I was in the food court of a mall, sitting with Malgwi and Wabe. They were both dressed sharply, as if on a break from the office: Malgwi in a trim pink dress and a blazer, Wabe in leather boots and a peacoat. But they weren’t on a break from anything. They were talking about how the work had ruined them.

At one point in the conversation, Wabe told me he was willing to show me a few examples of violent videos he snuck out while working for Sama and later shared with his attorney. If I wanted to understand “exactly what we see and moderate on the platform,” Wabe explained, the opportunity was right in front of me. All I had to do was say yes.

I hesitated. I was genuinely curious. A part of me wanted to know, wanted to see first-hand what he had to deal with for more than a year. But I’m sensitive, maybe a little breakable. A lifelong insomniac. Could I handle seeing this stuff? Would I ever sleep again?

It was a decision I didn’t have to make. Malgwi intervened. “Don’t send it to her,” she told Wabe. “It will traumatize her.”

So much of this story, I realized, came down to this minute-long exchange. I didn’t want to see the videos because I was afraid of how they might affect me. Malgwi made sure I didn’t have to. She already knew what was on the other side of the screen.

Meta cozies up to Vietnam, censorship demands and all
https://www.codastory.com/authoritarian-tech/vietnam-censorship-facebook/ | Thu, 28 Sep 2023
U.S. social media companies have become indispensable partners in Vietnam's information control regime

When Vietnamese Prime Minister Pham Minh Chinh and his delegation visited Meta’s Menlo Park headquarters in California last week, they were welcomed with a board reminiscent of Facebook’s desktop interface.

“What’s on your mind?” it read at the top. Beneath the standard status update prompt were a series of messages written in Vietnamese that extended a warm welcome to the prime minister, underscoring the collaboration between his government and the social media giant. Sunny statements are reported to have dominated the meeting in which the two sides rhapsodized about bolstering their partnership.

Prime Minister Chinh highlighted the instrumental role American companies, Meta in particular, might play in unlocking the potential of the Comprehensive Strategic Partnership that the U.S. and Vietnam cemented in mid-September. He encouraged Meta to deepen its ties with Vietnamese firms to boost the digital economy. Joel Kaplan, Meta’s vice president for U.S. public policy, indicated willingness to support Vietnamese businesses of all sizes, adding that the company hopes to continue producing “metaverse equipment” in the country.

The warm aura of the meeting obscured an uncomfortable reality for Meta on the other side of the Pacific: It has become increasingly enmeshed in the Vietnamese government’s draconian online censorship regime. In a country whose leaders once frowned upon it, Facebook has seen its relationship with the Vietnamese government morph from one of animosity to an unlikely alliance of convenience. No small feat for the social media giant.

Facebook has long been the most popular social media platform in Vietnam. Today, over 70% of Vietnam’s total population of nearly 100 million people use it for content sharing, business operations and messaging.

For years, Facebook’s approach to content policy in Vietnam appeared to be one of caution, in which the company brought some adherence to free speech principles to decision-making when it was faced with censorship demands from the government. But in 2020, it shifted to one of near-guaranteed compliance with official demands, at least in the eyes of Vietnamese authorities. That year, the Vietnamese government claimed that the company went from approving 70% to 75% of censorship requests to a staggering 95%. Since then, Vietnamese officials have maintained that Facebook’s compliance rate is upwards of 90%.

Meta’s deference to Vietnam’s official line continues today. Last June, an article in the Washington Post quoted two former employees who, speaking on the condition of anonymity, said that Facebook had adopted an internal list of Vietnam Communist Party officials whom it agreed to shield from criticism on its platform. The undisclosed list is included in the company’s internal guidelines for moderating online content, and Vietnamese authorities have significant sway over it, the Post reported. While the Post did not cite the names of the Vietnamese officials on the list, it noted that Vietnam is the only country in East Asia for which Facebook provides this type of white-glove treatment.

Also in June, the government instructed cross-border social platforms to employ artificial intelligence models capable of automatically detecting and removing “toxic” content. A month earlier, in the name of curbing online scams, the authorities said they were gearing up to enforce a requirement that all social media users, whether on local or foreign platforms, verify their identities.

These back-to-back developments are emblematic of the Vietnamese government’s growing confidence in asserting its authority over Big Tech.

Facebook’s corporate headquarters location in Menlo Park, California. Josh Edelson/AFP via Getty Images.

How has Vietnam reached this critical juncture? Two key factors seem to account for why Vietnamese authorities are able to boss around Big Tech.

The first is Vietnam’s economic lure. Vietnam’s internet economy is one of the most rapidly expanding markets in Southeast Asia. According to a report by Google and Singapore’s Temasek Holdings, Vietnam’s digital economy hit $23 billion in 2022 and is projected to reach approximately $50 billion by 2025, with growth fueled primarily by a thriving e-commerce sector. 

Dangling access to a market of nearly 100 million people, Vietnamese authorities have become increasingly adept at exploiting their economic leverage to browbeat Big Tech companies into compliance. Beyond Facebook’s 70 million users, DataReportal estimates that YouTube has 63 million users and TikTok around 50 million in Vietnam.

Although free speech principles were foundational for major American social media platforms, it may be naive to expect them to adhere to any express ideological value proposition at this stage. Above all else, they prioritize rapid growth, outpacing competitors and solidifying their foothold in online communication and commerce. At the end of the day, it is the companies’ bottom line that has dictated how Big Tech operates across borders.

Alongside market pressures, Vietnam has also gained leverage through its own legal framework. Big Tech companies have recognized that they need to adhere to local laws in the countries where they operate, and the Vietnamese government has capitalized on this, amping up its legal arsenal to tighten its grip on cyberspace, knowing full well that Facebook, along with YouTube and TikTok, will comply. Nowhere is this tactic more manifest than in the crackdown on what the authorities label as anti-state content. 

Over the past two decades, the crackdown on anti-state content has shaped the way Vietnamese authorities deploy various online censorship strategies, while also dictating how a raft of laws and regulations on internet controls are formulated and enforced. From Hanoi’s perspective, anti-state content can undermine national prestige, besmirch the reputation of the ruling Communist Party and slander and defame Vietnamese leaders.

There is one other major benefit that the government derives from the big platforms: It uses them to promote its own image. Like China, Vietnam has since 2017 deployed a 10,000-strong military cyber unit tasked with manipulating online discourse to enforce the Communist Party’s line. The modus operandi of Vietnam’s cyber troops has been to ensure “a healthy cyberspace” and protect the regime from “wrong,” “distorting,” or “false news,” all of which are in essence “anti-state” content in the view of the authorities.

And the biggest companies now readily comply. A majority of the online posts that YouTube and Facebook have restricted or removed at the behest of Vietnamese authorities involved “government criticism” or content that would “oppose the Communist Party and the Government of Vietnam,” according to transparency reports from Google and Facebook.

The latest data disclosed by Vietnam’s Ministry of Information and Communications indicates that censorship compliance rates by Facebook and YouTube both exceed 90%.

In this context, Southeast Asia provides a compelling case study. Four of the 10 countries with the highest number of Facebook users worldwide are in Southeast Asia: Indonesia, the Philippines, Vietnam and Thailand. Across the region, censorship requests have pervaded the social media landscape and redefined Big Tech-government relations.

“Several governments in the region have onerous regulation that compels digital platforms to adhere to strict rules over what content is or isn’t allowed to be on the platform,” Kian Vesteinsson, an expert on technology and democracy at Freedom House, told me. “Companies that don’t comply with these rules may risk fines, criminal or civil liability, or even outright bans or blocks,” Vesteinsson said.

But a wholesale ban on any of the biggest social platforms feels highly improbable today. These companies have become indispensable partners in Vietnam’s online censorship regime, to the point that the threat of shutting them down is more of a brinkmanship tactic than a realistic option. In other words, they are too important to Vietnam to be shut down. And the entanglement goes both ways — for Facebook and Google, the Vietnamese market is too lucrative for them to back out or resist censorship demands.

To wit: Vietnam threatened to block Facebook in 2020 over anti-government posts, but the threat never materialized. And Facebook has largely met the demands of Vietnamese authorities ever since.

Last May, TikTok faced a similar threat. Vietnam launched a probe into the platform’s local operations, warning that any failure to comply with Vietnamese regulations could see it shown the door in this lucrative market. While the outcome of the inspection is pending and could be released at any time, there are already signs that TikTok, the only foreign social media platform to have set up shop in Vietnam, will do whatever it takes to stay on the good side of the authorities. In June, TikTok admitted to wrongdoing in Vietnam and pledged to take corrective action.

The fuss that Vietnamese authorities have made about both Facebook and TikTok has likely masked their real intent: to further strong-arm these platforms into becoming more compliant and answerable to Vietnamese censors. Judging by their playbook, Vietnamese authorities are likely to continue wielding the stick of shutdown as a pretext to tighten the grip on narratives online, fortify state controls on social media and solidify the government’s increasing leverage over Big Tech.

Could a different kind of platform emerge in this milieu? The scale of Vietnam’s economy would scarcely allow for it: The prospect of building a more robust domestic internet ecosystem that could elbow out Facebook or YouTube doesn’t really exist. Absent bigger political and economic changes, Hanoi will remain reliant on foreign tech platforms to curb dissent, gauge public sentiment, discover corrupt behavior by local officials and get out its own messages to its internet-savvy population.

For Arab dissidents, the walls are closing in
https://www.codastory.com/authoritarian-tech/arab-dissidents-extradition/ | Wed, 27 Sep 2023
The Arab League is relying on the little-known Arab Interior Ministers Council to target critics abroad. Now, a former detainee is taking them to court in the U.S.

In November 2022, Sherif Osman was having lunch with his fiancee, his sister and other family members at a glittering upscale restaurant in Dubai. A former military officer in Egypt and now a U.S. citizen, Osman had traveled to Dubai with his fiancee, Virta, so his family could meet her for the first time.

Toward the end of the meal, Osman got up and said to Virta, “Go ahead and finish up, I’ll go vape outside.” He kissed her on the forehead and walked out the door. 

When Virta came out of the restaurant a few minutes later, she saw Osman talking to two men. Initially, she thought they were talking about parking spots. Then one of them grabbed his arm and started dragging him into a car.

Virta tried to get to Osman but the car sped away, leaving her standing on the side of the road with his family.

Virta, who is originally from Finland, knew that Osman had been making YouTube videos about human rights violations in Egypt, but it was a part of his life she knew little about. Osman left Egypt in 2004 after becoming frustrated with the corruption he witnessed within the government while serving as an air force captain. He is now considered a deserter. Two years after leaving his home country, he set up a YouTube channel, @SherifOsmanClub, where he routinely criticized the Egyptian government. Today, the channel has more than 40,000 subscribers. 

A few weeks before traveling to Dubai, Osman had posted a video calling for Egyptians to capitalize on COP27, the United Nations climate conference due to be held that month in Sharm El-Sheikh, to protest the state’s dismal human rights record and the rising cost of living.

In the car, Osman’s mind was spinning. When they approached a turn on the highway that leads to the international airport he began to panic, fearful that he was on a one-way trip to his grave.

“I have seen very, very, very high-ranking Egyptians that have lived in Dubai and opened their mouths with a different narrative on Egypt, and they were actually put on a flight and shipped out to Egypt,” he said, referring to former Egyptian prime minister Ahmed Shafiq, who was deported from the UAE just days after he announced he was running for president in 2017.

Osman soon realized that he was being taken to the Dubai police headquarters.

Dubai’s central prison where Sherif Osman was detained. Giuseppe Cacace/AFP via Getty Images.

He was escorted through the back entrance of the building. Osman waited for hours while officers moved frantically around the room, giving him no information. When he asked for clarity, they told him to wait and promised to bring him coffee.

“They actually made me coffee,” he told me, laughing. Osman’s sardonic sense of humor comes out in full force when he recounts the ordeal.

Osman was eventually taken from police headquarters to the Dubai Central Prison where he was made to wait while the authorities decided if he would be deported to Egypt. On November 15, Charles McClellan, an officer in the U.S. Consulate in Dubai, told Virta that Interpol had issued a red notice and extradition case number for Osman.

A few days later, Virta sent an email to Radha Stirling in Windsor, a town in southeast England, pleading for assistance. “Sherif’s deportation to Egypt is a death penalty without a fair trial!” Virta wrote.

Stirling, the CEO of an organization called Detained in Dubai, was no stranger to these kinds of cases. Knowing that the United Arab Emirates could extradite a U.S. citizen to Egypt in the dark of night, Stirling acted quickly. She contacted the American embassy to offer advice, tried to rally support from U.S. politicians and sought media coverage of the case.

And then something strange happened. McClellan told Stirling that he’d gotten new information: According to the UAE, Osman was detained on a “red notice” issued by a less well-known organization: the Arab Interior Ministers Council. An Emirati official speaking to The Guardian confirmed the same.

When Osman learned it was not Interpol but rather the Arab Interior Ministers Council pursuing the case, his heart sank. “That’s when I was like, I’m fucked,” he told me.

The Arab League meeting in Cairo on May 7, 2023. Khaled Desouki/AFP via Getty Images.

A body made up of the interior ministries of all 22 Arab League states, the Arab Interior Ministers Council was established in the 1980s to strengthen cooperation between Arab states on internal security and combating crime. In recent years, it has played an increasingly visible role in extradition cases between Arab countries, particularly in cases that appear to be politically motivated.

Experts I spoke with say that the shift has occurred as some of the Council’s member states, including the UAE and Egypt, have become notorious for abusing Interpol’s system. Although it is often portrayed in the media as an international police force with armed agents and the power to investigate crimes, Interpol is best understood as an electronic bulletin board where states can post “wanted” notices and other information about suspected criminals. Arab League states are increasingly posting red notices via Interpol in an effort to target political opponents, despite Interpol rules expressly prohibiting the practice.

Ted Bromund, a senior research fellow at the Heritage Foundation, thinks tensions surrounding Interpol may be driving increased cooperation within the Council, especially in politically motivated cases. “My suspicion is that this Arab Ministers Council is basically a reaction to the fact that Interpol is maybe not quite as compliant or as lax as they used to be,” Bromund told me.

It was around 2018, shortly after Washington Post columnist Jamal Khashoggi, a Saudi-born U.S. resident, was murdered in the Saudi Arabian consulate in Turkey, that Abdelrahman Ayyash first heard of the Council. Ayyash is a case manager at the Freedom Initiative, which advocates for people wrongfully detained in the Middle East and North Africa.

Ayyash told me that over the past year he has identified at least nine cases in which the Council was likely involved in the extradition or arrest of political dissidents, with some of them dating as far back as 2016. In one case, Kuwait extradited eight Egyptians to Cairo in 2019 following accusations that they were part of a terrorist cell with links to the Muslim Brotherhood. Ayyash suspects their arrest and deportation stemmed from a notice from the Arab Interior Ministers Council.

In another case highlighted by advocates, Morocco extradited activist Hassan al-Rabea to Saudi Arabia after he was arrested on a warrant that The New Arab reported was issued by the Council. Hassan’s brother Munir is wanted by the Saudi government due to his involvement in the country’s 2011 protest movement. Their older brother, Ali, is already in a Saudi prison, where he is facing the death penalty. Another of al-Rabea’s brothers, Ahmed, told me over the phone from Canada that he is now extremely careful about where he travels: “For me, like all my brothers, it is extremely scary to go to any Arab country,” he said.

Agreements enabling closer extradition cooperation among Arab states and other nearby countries are also being widely adopted. In 2020, Morocco, Sudan, the UAE and Bahrain signed an agreement with Israel known as the Abraham Accords, which established official relations between the signatories. Since then, Morocco and the UAE in particular have increased their use of repressive technologies developed by Israeli companies when targeting dissidents abroad. Last year, 24% of Israel’s defense exports went to Abraham Accords signatories. In 2021, Egypt signed an agreement to strengthen military cooperation with Sudan after years of tensions, including a border dispute.

Members of the Arab Interior Ministers Council are signatories to the Riyadh Arab Agreement for Judicial Cooperation and the Arab Convention for the Suppression of Terrorism, which prohibit extraditions if the crime is of a “political nature.”

Three U.N. special rapporteurs in June wrote a letter to the Arab League stating that red notices issued by the Council do not comply with member states’ commitments under international law, such as non-refoulement, non-discrimination, due diligence and fair trial.

Saudi Arabian Crown Prince Mohammed bin Salman greets President of Egypt Abdel Fattah El-Sisi ahead of the 32nd Arab League Summit in Jeddah, Saudi Arabia on May 19, 2023. Bandar Aljaloud/Royal Court of Saudi Arabia/Handout/Anadolu Agency via Getty Images.

A few weeks after Osman’s arrest, Virta returned to the U.S. for her job. She adjusted her schedule to work different hours, so she could be awake for part of the night working on his release.

Behind bars in Dubai, Osman was struggling to sleep. “The second I opened my eyes my head would go numb, the exact second my eyes opened, I realized I am in deep shit,” he told me. “I can count the days that I had a full night’s sleep on one hand and have left over fingers.”

Virta was certain the UAE was going to extradite him to Egypt. But then, late one night towards the end of December, she got a call.

“I have some good news,” Osman told her. He was going to be released.

Osman was taken to the airport five days later, but it was not until the plane door closed that he allowed himself to believe he was actually going home. When the door clicked shut, he passed out from exhaustion. Osman had spent 46 days in detention.

This past July, Osman filed a lawsuit in the U.S. District Court in Washington, D.C. against Interpol and its president, Major General Ahmed Naser Al-Raisi; the UAE and its deputy prime minister; Egypt and its president, Abdel Fattah El-Sisi; the Arab Interior Ministers Council; a UAE prosecutor; and four other unnamed individuals. The complaint accuses them of international terrorism for their “kidnapping, abduction, imprisonment, prosecution, and threatened extradition” of Osman.

The 32nd Arab League Summit in Jeddah, Saudi Arabia on May 19, 2023. Bandar Aljaloud/Royal Court of Saudi Arabia/Handout/Anadolu Agency via Getty Images.

The lawsuit accuses Interpol of colluding to shift the justification for Osman’s detention from an Interpol red notice to one issued by the Arab Interior Ministers Council. An Interpol spokesperson said “there is no indication that a notice or diffusion ever existed in Interpol’s databases,” but Osman’s lawyers say otherwise.

Osman hopes that the case will push Interpol to agree to reforms, such as improving its system for reviewing cases in order to determine whether they are politically motivated. If his lawyers can prove that what the Arab Interior Ministers Council did was an act of terrorism, Osman expects this will make it much harder for Arab states to justify their participation in its functions. “Funding it would be very hard at that point,” he said, as it would effectively mean that the Arab League was funding a terrorist organization. One of Osman’s lawyers is also seeking an agreement from the UAE to stop accepting red notices for U.S. citizens by way of the Council.

Osman and Virta now live in a small city in Massachusetts, where they largely keep to themselves. “The speed limit is 35 miles and people don’t say hi to each other. It’s New England, so everybody’s an asshole,” said Osman. “There’s even a word for it: ‘Massholes.’”

He sees a psychologist who specializes in post-traumatic stress disorder. Osman says it is helping him understand what feels like a “new self.”

Osman is trying to launch a cannabis cultivation business, which missed out on some vital funding when investors heard about his arrest. He stayed quiet for six months after his release, but recently went back to posting about Egypt’s human rights record online. 

“I’m back again, talking and tearing down the president and his regime and military regime without mercy,” he said. “I got the news that they are worried in Egypt about my case.”

CORRECTION (09/29/2023): An earlier version of this article described Jamal Khashoggi as a U.S. citizen. It has been corrected to reflect that Khashoggi was a U.S. resident.

Without space to detain migrants, the UK tags them
https://www.codastory.com/authoritarian-tech/uk-gps-tagging-home-office-asylum/ | Thu, 21 Sep 2023
The Home Office says electronically tracking asylum seekers is a humane alternative to detention. But migrants say it’s damaging their mental health

The U.K. is presenting asylum seekers with an ultimatum: await deportation and asylum processing in Rwanda, face detention or wear a tracking device. Or leave voluntarily.

As thousands of people continue to arrive in the U.K., the British authorities are scrambling for new ways to monitor and control them. Under the government’s new rules, Britain has a legal duty to detain and deport anyone who arrives on its shores via truck or boat regardless of whether they wish to seek asylum. Passed in July 2023, the Illegal Migration Act has already been described by the United Nations Human Rights Office as “exposing refugees to grave risks in breach of international law.”

More than 20,000 people have come to the U.K. on small boats so far in 2023, and some 175,000 people are already waiting for an asylum decision. But officials say the U.K. does not have the physical space to detain people under the new law. And a public inquiry report published this week argued that the U.K. should not detain migrants for more than 28 days. The report found evidence of abusive, degrading and racist treatment of migrants held in a detention center near London’s Gatwick Airport.

With detention centers at capacity and under scrutiny for mistreating migrants, and with the Rwanda scheme facing court challenges, those awaiting deportation or asylum proceedings are increasingly being monitored with technology instead, such as GPS-enabled ankle trackers that allow officials to follow the wearer’s every move. The ankle tracker program, which launched as a pilot in June 2022, was initially scheduled to last 12 months. But this summer, without fanfare, the government quietly uploaded a document to its website announcing that it was extending the pilot to the end of 2023.

A Home Office spokesperson told me that “the GPS tracking pilot helps to deter absconding.” But absconding rates among migrants coming to the U.K. are low: The Home Office itself reported that they stood at 3% in 2019 and 1% in 2020, in response to a Freedom of Information request filed by the advocacy group Migrants Organize. In other official statements, the Home Office has expressed concern that the Rwanda policy may lead to “an increased risk of absconding and less incentive to comply with any conditions of immigration bail.” So authorities are fitting asylum seekers with GPS tags to ensure they don’t disappear before they can be deported.

Privacy advocates say the policy is invasive, ineffective and detrimental to the mental and physical health of the wearers. 

“Forging ahead, and massively expanding, such a harmful scheme with no evidence to back up its usefulness is simply vindictive,” said Lucie Audibert, a legal officer at the digital rights group Privacy International, which launched a legal challenge against the pilot program last year, arguing there were not adequate safeguards in place to protect people’s basic rights. 

Migrants who have been tagged under the scheme say the experience is dehumanizing. “It feels like an outside prison,” said Sam, a man in his thirties who fled a civil war with his family when he was a small child and has lived in the U.K. ever since. Sam, whose name has been changed, was told by the Home Office at the end of last year that he would need to wear a tag while the government considered whether to deport him after he had served a criminal sentence.

The Home Office has also outsourced the implementation of the GPS tracking system to Capita PLC, a private security company. Capita has been tasked with fitting tags and monitoring the movements and other relevant data collected on each person wearing a device. For migrants like Sam, that meant dealing with anonymous Capita staff — rather than the government — whenever his tag was fitted, checked or replaced.

After a month of wearing the tag, Sam felt depression beginning to set in. He was worried about leaving the house, for fear of accidentally bumping the strap. He was afraid that if too many problems arose with the tracker, the Home Office might use it as an excuse to deport him. Another constant anxiety weighed on him too: keeping the device charged. Capita staff told him its battery could last 24 hours. But he soon found out that wasn’t true — and it would lose charge without warning when he was out, vibrating loudly and flashing with a red light.

“Being around people and getting the charger out so you can charge your ankle — it’s so embarrassing,” Sam said. He never told his child that he had been tagged. “I always hid it under tracksuits or jeans,” he said, not wanting to burden his child with the constant physical reminder that he could be deported.

The mental health problems Sam experienced are not unusual for people who have to wear tracking devices. In the U.S., border authorities first deployed ankle monitors in 2014, in response to an influx of migrants from Central America. According to a 2021 study surveying 150 migrants forced to wear the devices, 12% said wearing the tags led to thoughts of suicide, while 40% said they believed they had been psychologically scarred by the experience.

Capita staff regularly showed up at Sam’s home to check on the tag, and they often came at different times than the Home Office told Sam they would come. Sometimes, they would show up without any warning at all. 

Sam remembered an occasion when Capita officers told him that “the system was saying the strap had been tampered with.” The agents examined his ankle and found nothing wrong with the device. This became a routine: The team showed up randomly to tell him there was a problem or that his location wasn’t registering. “It was all these little things that seemed to make out I was doing something wrong. In the end, I realized it wasn’t me, it was the tag that was the problem. I felt harassed,” Sam told me. 

At one point, Sam said he received a letter from the Home Office saying he had breached his bail conditions because he had not been home when the Capita people came calling. According to Home Office documents, breaching bail conditions is a good enough reason for the government to have access to a migrant’s “trail data”: a live inventory of a person’s precise location every minute of the day and night. He’s worried that this tracking data might be used against him as the government deliberates on whether or not to deport him. 

Sam is not alone in dealing with glitches with the tag. In a study of 19 migrants tagged under the British scheme, 15 participants had practical issues with the devices, such as the devices failing or chargers not working. 

When I asked Capita to comment on these findings, the company redirected me to the Home Office, which denied that there were any concerns. “Device issues are rare and service users are provided with a 24-hour helpline to report any problems,” a government spokesperson said. They then added: “Capita’s field and monitoring staff receive safeguarding training and are able to signpost tag wearers to support organizations where appropriate.”

Migration campaigners say contracts like the one the Home Office has with Capita serve to line the pockets of big private security companies at the taxpayers’ expense while helping the government push out the message that it is being tough on immigration.

“Under this government, we have seen a steep rise in the asylum backlog,” said Monish Bhatia, a lecturer in Sociology at the University of York, who studies the effects of GPS tagging. “Instead of directing resources to resolving this backlog,” he told me, “they have come up with rather expensive and wasteful gimmicks.” 

The ankle monitor scheme forms part of Britain’s so-called “hostile environment” policy, introduced more than a decade ago by then-Home Secretary Theresa May, who described it as an effort to “create, here in Britain, a really hostile environment for illegal immigrants.” It has seen the government pour billions of pounds into deterring and detaining migrants — from building a high-tech network of surveillance along the English channel in an attempt to thwart small boat crossings to the 120 million pound ($147 million) deal to deport migrants to Rwanda. 

The Home Office estimates it will have to spend between 3 and 6 billion pounds (between $3.68 and $7.36 billion) on detaining, accommodating and removing migrants over the next two years. But the option to tag people, while cheaper than keeping them locked up, also costs the government significant amounts of money. The U.K. currently has two contracts with security companies for electronically tagging both migrants and people in the criminal justice system: one with G4S, which provides the tag hardware, worth 22 million pounds ($27.5 million), and another with Capita, which fits and troubleshoots the tags and runs the electronic tagging services, worth 114 million pounds ($142 million).

The Home Office said the GPS tagging scheme would help streamline the asylum process and that it was “determined to break the business model of the criminal people smugglers and prevent people from making dangerous journeys across the Channel.” 

For his part, Sam eventually got his tag removed — he was granted an exception due to the tag’s effects on his mental health. For weeks after the tag was gone, he said, it still felt like it was there: He would put on his clothes and shoes as if it were still strapped to his ankle.

“It took me a while to realize I was actually free from their eyes,” he said. But his status remains uncertain: He is still facing the threat of deportation.

Correction: An earlier version of this article incorrectly stated Monish Bhatia’s affiliation. As of April 2023, he is a lecturer at the University of York, not Birkbeck, University of London.

For migrants under 24/7 surveillance, the UK feels like ‘an outside prison’
https://www.codastory.com/authoritarian-tech/gps-ankle-tags-uk-migrants-home-office/ | Wed, 13 Sep 2023
He’s lived in the UK since he was a small child. But the Home Office wants to deport him — and track him wherever he goes

In June 2022, the U.K. Home Office rolled out a new pilot policy — to track migrants and asylum seekers arriving in Britain with GPS-powered ankle tags. The government argues that ankle tags could be necessary to stop people from absconding or disappearing into the country. Only 1% of asylum seekers absconded in 2020. But that hasn’t stopped the Home Office from expanding the pilot. Sam, whose name we’ve changed to protect his safety, came to the U.K. as a refugee when he was a small child and has lived in Britain ever since. Now in his thirties, he was recently threatened with deportation and was made to wear a GPS ankle tag while his case was in progress. Here is Sam’s story, as told to Coda’s Isobel Cockerell.

I came to the U.K. with my family when I was a young kid, fleeing a civil war. I went to preschool, high school and college here. I’m in my thirties now and have a kid of my own. I don’t know anything about the country I was born in — England is all I know. 

I got my permanent residency when I was little. I remember my dad also started applying for our British citizenship when I was younger but never quite got his head around the bureaucracy. 

When I got older, I got into a lifestyle I shouldn’t have and was arrested and given a criminal sentence and jail time. The funny thing is, just before I was arrested, I had finally saved up enough to start the process of applying for citizenship myself but never got around to it in time.

In the U.K., if you’re not a citizen and you commit a crime, the government has the power to deport you. It doesn’t matter if you’ve lived here all your life. So now, I’m fighting the prospect of being kicked out of the only country I’ve ever known. 

When I finished my sentence, they kept me in prison under immigration powers. When I finally got bail, they said I’d have to wear a GPS-powered ankle tag so that I didn’t disappear. I couldn’t believe it. If I had been a British citizen, when I finished my sentence that would be it, I’d be free. But in the eyes of the government, I was a foreigner, and so the Home Office — immigration — wanted to keep an eye on me at all times. 

My appointments with immigration had a strange quality to them. I could tell from the way we communicated that the officers instinctively knew they were talking to a British person. But the system had told them to treat me like an outsider and to follow the procedures for deporting me. They were like this impenetrable wall, and they treated me like I was nothing because I didn’t have a passport. They tried to play dumb, like they had no idea who I was or that I had been here my whole life, even though I’ve always been in the system.

I tried to explain there was no need to tag me and that I would never abscond. After all, I have a child here who I want to stay with. They decided to tag me anyway.

The day came when they arrived in my holding cell to fit the tag. I was shocked by its bulkiness. I thought to myself, ‘How am I going to cover this up under my jeans?’ I love to train and keep fit, but I couldn’t imagine going to the gym with this thing around my ankle. 

It’s hard to explain what it’s like to wear that thing. When I was first released — after many months inside — it felt amazing to be free, to wake up whenever I wanted and not have to wait for someone to come and open my door.

But gradually, I started to realize I wasn’t really free. And people did come to my door. Not prison guards, but people from a private security company. I later learned that company is called Capita. When things go wrong with the tag, it’s the Capita people who show up at your home.

The visits were unsettling. I had no idea how much power the Capita people had or whether I was even obliged, legally, to let them in. The employees themselves were a bit clueless. Sometimes I would level with them, and they would admit they had no idea why I was being tagged.

It soon became clear that the technology attached to my ankle was pretty glitchy. One time, they came and told me, ‘The system says the tag had been tampered with.’ They checked my ankle and found nothing wrong. It sent my mind whirring. What had I done to jolt the strap? I suddenly felt anxious to leave the house, in case I knocked it while out somewhere. I began to move through the world more carefully. 

Other times, Capita staff came round to tell me my location had stopped registering. The system wasn’t even functioning, and that frustrated me. 

All these issues seemed to make out like I was the one doing something wrong. But I realize now it was nothing to do with me — the problem was with the tag, and the result was that I felt harassed by these constant unannounced visits by these anonymous Capita employees. 

In theory, the Home Office would call to warn you of Capita’s visits, but often they just showed up at random. They never came when they said they would. Once, I got a letter saying I breached my bail conditions after not being home when they came around. But I’d never been told they were coming in the first place. It was so anxiety-inducing: I was afraid if there were too many problems with the tag, it might be used against me in my deportation case. 

The other nightmare was the charging system. According to the people who fit my tag, the device could last 24 hours between charges. It never did. I’d be out and about or at work, and I’d have to calculate how long I could stay there before I needed to go home and charge. The low battery light would flash red, the device would start loudly vibrating, and I’d panic. Sometimes others would hear the vibration and ask me if it was my phone. Being around people and having to charge up your ankle is so embarrassing. There’s a portable charger, but it’s slow. If you want to charge up quicker, you have to sit down next to a plug outlet for two hours and wait. 

I didn’t want my child to know I’d been tagged or that I was having problems with immigration. I couldn’t bear the thought of trying to explain why I was wearing this thing around my ankle or that I was facing deportation. Whenever we were together I made sure to wear extra-loose jeans. 

I couldn’t think beyond the tag. It was always on my mind, a constant burden. It felt like this physical reminder of all my mistakes in life. I couldn’t focus on my future. I just felt stuck on that day when I was arrested. I had done my time, but the message from the Home Office was clear: There was no rehabilitation, at least not for me. I felt like I was sinking into quicksand, being pulled down into the darkness. 

My world contracted, and my mental health went into freefall. I came to realize I wasn’t really free: I was in an outside prison. The government knew where I was 24/7. Were they really concerned I would abscond, or did they simply want to intrude on my life? 

Eventually, my mental health got so bad I was able to get the tag removed, although I’m still facing deportation.

After the tag was taken off, it took me a while to absorb that I wasn’t being tracked anymore. Even a month later, I still put my jeans on as if I had the tag on. I could still kind of feel it there, around my ankle. I still felt like I was being watched. Of course, tag or no tag, the government always has other ways to monitor you. 

I’ve begun to think more deeply about the country I’ve always called home. This country that says it no longer wants me. The country that wants to watch my every move. I’m fighting all of it to stay with my child, but I sometimes wonder if, in the long term, I even want to be a part of this system, if this is how it treats people.

Advertising erectile dysfunction pills? No problem. Breast health? Try again
https://www.codastory.com/authoritarian-tech/meta-health-ads/ | Thu, 07 Sep 2023
Women’s health groups say Meta is discriminating against them, while letting men’s sexual health ads flourish

It happened again last week. Lisa Lundy logged into her company’s Instagram account only to be greeted with yet another rejection. This one was an advertisement about breast cancer awareness, featuring a close-up of a woman’s bare decolletage with the caption: “90% of breast cancer diagnoses are not hereditary.” 

Lundy thought the ad could educate social media users about the risk factors for breast cancer, but it never saw the light of day. Instead, Instagram rejected it for violating its policies on nudity and sexual activity.

For more than a year, Lundy’s company, Complex Creatures, has struggled to find a home for its content on Instagram. The platform has rejected scores of the company’s advertisements and posts since its account went live in June 2022. Lundy co-founded Complex Creatures with her sister, a breast cancer survivor, to raise awareness about the disease and provide health and wellness products for women undergoing breast cancer treatment. But the content rejections came rolling in as soon as she started posting. It didn’t take long for Lundy to realize that Meta, owner of Instagram, was nixing her content because of its subject matter: the breast. 

Screenshots of censored posts from the Complex Creatures Instagram account. Courtesy of Lisa Lundy.

“How do you desexualize the breast?” she asked. “It’s so much of what we’re trying to do.” But platforms like Instagram, Lundy said, “don’t want to let us.” In a call over Zoom, she shared some screenshots of her company’s censored content. One was a post about how massages can improve breast health, featuring a photo of a woman’s hands fully covering her breasts. “But they’re allowed to do this,” she sighed, pulling up an advertisement from a men’s health brand for an erectile dysfunction treatment containing an image of a hand clutching an eggplant with the caption: “Get hard.” The censorship, she added, “is an ongoing challenge. We’re talking about breast cancer and breast health.” Access to the right information about the disease and its risk factors, she explained, can be a matter of “life and death.”

The censorship that Lundy routinely confronts on Instagram is part of a deeper history at Meta, which has long faced criticism for censoring material about breasts on Facebook. But it’s not just breast-focused content that’s not getting through. Lundy belongs to a community of nonprofits and startups focused on women’s health that face routine — and often bewildering — censorship across Facebook and Instagram. 

Screenshots of censored posts from the Complex Creatures Instagram account. Courtesy of Lisa Lundy.

I spoke with representatives from six organizations focused on women’s health care globally, and they told me that while Meta regularly approves advertisements promoting men’s sexuality and sexual pleasure, it routinely blocks them from publishing advertisements and posts about a wide range of services aimed at women, including reproductive health, fertility treatments and breast care. Often, these posts are rejected on the grounds that they violate the company’s advertising policies on promoting sexual pleasure and adult content.

This kind of censorship comes at an existential moment for the U.S.-based reproductive rights community after the Supreme Court’s overturning of Roe v. Wade — the nearly 50-year-old ruling that legalized abortion across the U.S. — in 2022. As I reported in March 2023, abortion opponents have sought to clamp down on abortion speech online in the post-Roe era, introducing policies in Texas, Iowa, and South Carolina that would prohibit websites from publishing information about abortion. That’s on top of censorship that reproductive rights groups already face when they try to post content about accessing abortion care on platforms like Instagram and Facebook — even in countries where the procedure is legal. 

According to Emma Clark Gratton, a communications officer for the Australia chapter of the international reproductive health nonprofit MSI Reproductive Choices, the organization is routinely blocked from running ads about abortion services on Facebook, often for violating the company’s advertising policy on social issues, elections, and politics. Abortion is “totally legal” in Australia, Clark Gratton explained, but on Meta’s platforms, it is “still very restricted in terms of what we can post.” The organization’s clinical team in Australia, she added, can advertise for vasectomy services on Facebook, “but they definitely couldn’t do an ad promoting abortion services, which is literally what they do. They’re an abortion provider.”

Women First Digital, a group that provides information resources about abortion globally, has dealt extensively with restrictions on social media networks. Michell Mor, a digital strategy manager with the organization, put it to me this way: “Because big tech is from the United States, everything that happens there is replicated around the world.”

The impact of these restrictions reaches well beyond social media, says Carol Wersbe, chief of staff for the Center for Intimacy Justice, a nonprofit that has been tracking Meta’s rejections of health-related ads. 

“Advertising represents so much more than just a company getting an ad on Facebook,” Wersbe told me. “It’s visibility, access to information. If we can’t advertise for things like pelvic pain and endometriosis, how do we ever reduce the stigma from those topics?” 

In January 2022, the Center for Intimacy Justice published a survey of 60 women’s healthcare startups about their experiences with censorship on Facebook and Instagram. The participating companies offer products and services for a range of women’s healthcare needs, from fertility and pregnancy support to postpartum recovery, menstrual health, and menopause relief. All of the companies surveyed reported having their ads rejected by Instagram and Facebook, and half said their accounts were suspended after Meta removed their ads. According to the report, ads were frequently taken down after they were flagged for promoting “adult products and services,” which are not permitted under the company’s advertising policies.  

Some ads that didn’t make the cut featured products to relieve side effects of menopause; another included background about consent in school sexual education courses. During the same time period, the report points out, Meta approved ads for men’s sexual health products, including treatments for premature ejaculation, erectile dysfunction pills promising to help consumers “get hard or your money back” and men’s lubricants to “level up your solo time.” The platform allowed these ads despite its own rules prohibiting ads from promoting products and services that “focus on sexual pleasure.”

Meta quietly updated its advertising guidelines after the report came out, stating that ads for family planning, contraception, menopause relief, and reproductive health care are allowed. Though the social media giant expanded the scope of permissible advertisements on paper, Wersbe says the status quo remains unchanged. “Across the board, we’re still seeing our partners experiencing rejections,” she explained. The censorship that she and others in the field are observing cuts across languages, markets, and continents. “Facebook’s ads policy is a global policy, so when it changes something it affects their whole user base,” explained Wersbe. “We’ve seen rejections in Arabic, Spanish, French, Swedish, Swahili. It’s really pervasive.”

In March 2023, the organization filed a complaint with the U.S. Federal Trade Commission, urging the agency to investigate whether Meta is engaging in deceptive trade practices by rejecting ads from women’s health organizations that comply with its stated advertising policies, while allowing similar advertisements promoting men’s sexual health. The complaint alleges that the social media giant is unevenly applying its ads rules based on the gender of the target audience. These removals, it argues, constitute discriminatory censorship and perpetuate “inequality of access to health information and services for women and people of underrepresented genders.” 

In reporting this story, I contacted Meta with questions about the Center for Intimacy Justice’s report, the Federal Trade Commission complaint, and the rejection of Lundy’s advertisements. A spokesperson responded and shared the company’s published Community Standards, but declined to comment on the record.

Alexandra Lundqvist told me that alongside the outreach challenges that these issues create, ad rejections also make it harder for women-led health companies to get a leg up among investors. Lundqvist is a communications lead with The Case for Her, an investment firm that funds women’s sexual health organizations worldwide, including the Center for Intimacy Justice. “The general Silicon Valley big tech investor is not going to go to a women’s health company, especially when they can’t really advertise their work because they get blocked all the time. When these companies can’t advertise their work, they can’t scale, they can’t get funding,” Lundqvist explained. That exacerbates inequities that women and nonbinary entrepreneurs already face in securing investments from the male-dominated venture capital industry, creating a negative feedback loop for companies marketing products by and for women. “There is a big systems impact,” she added.

Lundy, who says her breast health company continues to experience widespread rejections despite Meta’s policy update, believes the censorship has a corrosive effect on consumers and creators alike. The content takedowns make it harder for entrepreneurs like herself to reach customers, make money, and attract investors. But they also prevent people from learning potentially life-saving information about breast cancer.

“There’s not a lot of information out there about breast health,” she said, describing her own lack of awareness about the disease prior to her sister’s diagnosis at age 37. “We had no family history,” she told me. “Her gynecologist missed it and she had never had a mammogram.” The experience, she continued, “really illuminated how much we didn’t know about our breasts.”

Lundy and her sister founded the company in part to address the information vacuum that left them both in the dark — to reach people before diagnosis and support those with the disease through treatment. But Meta makes that mission harder. “We want to normalize the breast,” she said, “but it’s almost like the algorithm and the people making the algorithms can’t think about a breast or a woman’s body in any way other than sexuality or arousal.” The censorship that Complex Creature routinely faces for posting material on Instagram about breast health, Lundy told me, “feels like the patriarchal system at work.”

The morning after our call, Lundy emailed me an update: a photo of two squashes meant to resemble breasts hanging side by side — the visual for an Instagram ad about her company’s summer sale. The post, she wrote, “was rejected last night. They’re gourds.”

The post Advertising erectile dysfunction pills? No problem. Breast health? Try again appeared first on Coda Story.

The Albanian town that TikTok emptied https://www.codastory.com/authoritarian-tech/albania-tiktok-migration-uk/ Thu, 24 Aug 2023 15:28:36 +0000 https://www.codastory.com/?p=42467 “It’s like the boys have gone extinct,” say women in Kukes. They’ve all left for London, chasing dreams of fast cars and easy money sold on social media


“I once had an idea in the back of my mind to leave this place and go abroad,” Besmir Billa told me earlier this year as we sipped tea in the town of Kukes, not far from Albania’s Accursed Mountains. “Of course, like everybody else, I’ve thought about it.”

The mountains rose up all around us like a great black wall. Across the valley, we could see a half-constructed, rusty bridge, suspended in mid-air. Above it stood an abandoned, blackened building that served during Albania’s 45-year period of communist rule as a state-run summer camp for workers on holiday. 

The Big Idea: Shifting Borders

Borders are liminal, notional spaces made more unstable by unparalleled migration, geopolitical ambition and the use of technology to transcend and, conversely, reinforce borders. Perhaps the most urgent contemporary question is how we now imagine and conceptualize boundaries. And, as a result, how we think about community.

In this special issue are stories of postcolonial maps, of dissidents tracked in places of refuge, of migrants whose bodies become the borderline, and of frontier management outsourced by rich countries to much poorer ones.

Since the fall of communism in 1991, Kukes has lost roughly half of its population. In recent years, thousands of young people — mostly boys and men — have rolled the dice and journeyed to England, often on small boats and without proper paperwork. 

Fifteen years ago, people would come to Kukes from all over the region for market day, where they would sell animals and produce. The streets once rang with their voices. Those who’ve lived in Kukes for decades remember it well. Nowadays, it’s much quieter.

Billa, 32, chose not to leave. He found a job in his hometown and stayed with his family. But for a person his age, he’s unusual.

You can feel the emptiness everywhere you go, he told me. “Doctors all go abroad. The restaurants are always looking for bartenders or waiters. If you want a plumber, you can’t find one.” Billa’s car broke down recently. Luckily, he loves fixing things himself — because it’s difficult to find a mechanic.

Besmir Billa playing a traditional Albanian instrument, called the cifteli, in Kukes.

All the while, there is a parallel reality playing out far from home, one that the people of Kukes see in glimpses on TikTok and Instagram. Their feeds show them a highly curated view of what their lives might look like if they left this place: good jobs, plenty of money, shopping at designer stores and riding around London in fast cars. 

In Kukes, by comparison, times are tough. Salaries are low, prices are rising every week and there are frequent power outages. Many families can barely afford to heat their homes or pay their rent. For young people growing up in the town, it’s difficult to persuade them that there’s a future here.

Three days before I met Billa, a gaggle of teenage boys chased a convoy of flashy cars down the street. A Ferrari, an Audi and a Mercedes had pulled into town, revving their engines and honking triumphantly. The videos were uploaded to TikTok, where they were viewed and reposted tens of thousands of times.

Behind the wheel were TikTok stars Dijonis Biba and Aleks Vishaj, on a victory lap around the remote region. They’re local heroes: They left Albania for the U.K. years ago, became influencers with hundreds of thousands of followers, and now they’re back, equipped with cars, money and notoriety.

Vishaj, dubbed the "King of TikTok" by the British tabloids, was reportedly convicted of robbery in the U.K. and deported in 2021. Biba, a rapper, made headlines in the British right-wing press the same year for posting instructions on YouTube on how to enter the U.K. with false documents. Police then found him working in a secret cannabis house in Coventry. He was eventually sentenced to 15 months in prison.

The pair now travel the world, uploading TikTok videos of their high-end lifestyle: jet skiing in Dubai, hanging out in high-rise hotels, driving their Ferrari with the needle touching 300 kilometers per hour (180 mph) through the tunnel outside Kukes. 

Billa’s nephews, who are seven and 11, were keen to meet the pair and get selfies when they came to town, like every other kid in Kukes.

“Young people are so affected by these models, and they’re addicted to social media. Emigrants come back for a holiday, just for a few days, and it’s really hard for us,” Billa said. 

Billa is worried about his nephews, who are being exposed to luxury lifestyle videos from the U.K., which go against the values that he’s trying to teach them. They haven’t yet said they want to leave the country, but he’s afraid that they might start talking about it one day. “They show me how they want a really expensive car, or tell me they want to be social media influencers. It’s really hard for me to know what to say to them,” he said.

Billa feels like he’s fighting against an algorithm, trying to show his nephews that the lifestyle that the videos promote isn’t real. “I’m very concerned about it. There’s this emphasis for kids and teenagers to get rich quickly by emigrating. It’s ruining society. It’s a source of misinformation because it’s not real life. It’s just an illusion, to get likes and attention.”

And he knows that the TikTok videos that his nephews watch every day aren’t representative of what life is really like in the U.K. “They don’t tell the darker story,” he said.

The Gjallica mountains rise up around Kukes, one of the poorest cities in Europe.

In 2022, the number of people leaving Albania for the U.K. rose dramatically, as did the number seeking asylum: around 16,000 claims, more than triple the previous year. According to the Migration Observatory at the University of Oxford, one reason for the uptick in claims may be that Albanians who lack proper immigration status are more likely to be identified, leading them to claim asylum in order to delay being deported. But Albanians claiming asylum are also often victims of blood feuds — long-standing disputes between communities, often resulting in cycles of revenge — and of viciously exploitative trafficking networks that threaten them and their families if they return to Albania.

By 2022, Albanian criminal gangs in Britain were in control of the country’s illegal marijuana-growing trade, taking over from Vietnamese gangs who had previously dominated the market. The U.K.’s lockdown — with its quiet streets and newly empty businesses and buildings — likely created the perfect conditions for setting up new cannabis farms all over the country. During lockdown, these gangs expanded production and needed an ever-growing labor force to tend the plants — growing them under high-wattage lamps, watering them and treating them with chemicals and fertilizers. So they started recruiting. 

Everyone in Kukes remembers it: The price of passage from Albania to the U.K. on a truck or small boat suddenly dropped when Covid-19 restrictions began to ease. Before the pandemic, smugglers typically charged 18,000 pounds (around $22,800) to take Albanians across the channel. But last year, posts started popping up on TikTok advertising knock-down prices to Britain starting at around 4,000 pounds (around $5,000). 

People in Kukes told me that even if they weren’t interested in being smuggled abroad, TikTok’s algorithm would feed them smuggling content — so while they were watching other unrelated videos, suddenly an anonymous post advertising cheap passage to the U.K. would appear on their “For You” feed.

TikTok became an important recruitment tool. Videos advertising “Black Friday sales” offered special discounts after Boris Johnson’s resignation, telling people to hurry before a new prime minister took office, or when the U.K. Home Office announced its policy to relocate migrants to Rwanda. People remember one post that even encouraged Albanians to come and pay their respects to Queen Elizabeth II when she died in September last year. There was a sense of urgency to the posts, motivating people to move to the U.K. while they still could, lest the opportunity slip away. 

The videos didn’t go into detail about what lay just beneath the surface. Criminal gangs offered to pay for people’s passage to Britain, on the condition they worked for them when they arrived. They were then typically forced to work on cannabis farms to pay off the money they owed, according to anti-human trafficking advocacy groups and the families that I met in Kukes. 

Elma Tushi, 17, in Kukes, Albania.

“I imagined my first steps in England to be so different,” said David, 33, who first left Albania for Britain in 2014 after years of struggling to find a steady job. He could barely support his son, then a toddler, or his mother, who was having health problems and couldn’t afford her medicine. He successfully made the trip across the channel by stowing away in a truck from northern France. 

He still remembers the frightened face of the Polish driver who discovered him hiding in the wheel well of the truck, having already reached the outskirts of London. David made his way into the city and slept rough for several weeks. “I looked at everyone walking by, sometimes recognizing Albanians in the crowd and asking them to buy me bread. I couldn’t believe what was happening to me.” 

He found himself half-hoping the police might catch him and send him home. “I was so desperate. But another part of me said to myself, ‘You went through all of these struggles, and now you’re going to give up?’”

David, who asked us to identify him with a pseudonym to protect his safety, found work in a car wash. He was paid 35 pounds (about $44) a day. “To me, it felt like a lot,” he said. “I concentrated on saving money every moment of the day, with every bite of food I took,” he told me, describing how he would live for three or four days on a tub of yogurt and a package of bread from the grocery chain Lidl, so that he could send money home to his family.

At the car wash, his boss told him to smile at the customers to earn tips. “That’s not something we’re used to in Albania,” he said. “I would give them the keys and try to smile, but it was like this fake, frozen, hard smile.”

Like David, many Albanians begin their lives in the U.K. by working in the shadow economy, often at car washes or construction sites where they’re paid in cash. While there, they can be targeted by criminal gangs with offers of more lucrative work in the drug trade. In recent years, gangs have funneled Albanian workers from the informal labor market into cannabis grow houses. 

David said he was careful to avoid the lure of gangsters. At the French border, someone recognized him as Albanian and approached, offering him a “lucky ticket” to England with free accommodation when he arrived. He knew what price he would have to pay — and ran. “You have to make deals with them and work for them,” he told me, “and then you get sucked into a criminal life forever.”

It’s a structure that traps people in a cycle of crime and debt: Once in the U.K., they have no documents and are at the mercy of their bosses, who threaten to report them to the police or turn them in to the immigration authorities if they don’t do as they say.

Gang leaders manipulate and intimidate their workers, said Anxhela Bruci, Albania coordinator at the anti-trafficking foundation Arise, who I met in Tirana, the Albanian capital. “They use deception, telling people, ‘You don’t have any documents, I’m going to report you to the police, I have evidence you have been working here.’ There’s that fear of going to prison and never seeing your family again.” 

Gangs, Bruci told me, will also make personal threats against the safety of their victims’ families. “They would say, ‘I’m going to kill your family. I’m going to kill your brother. I know where he lives.’ So you’re trapped, you’re not able to escape.”

She described how workers often aren’t allowed to leave the cannabis houses they’re working in, and are given no access to Wi-Fi or internet. Some are paid salaries of 600-800 pounds (about $760-$1,010) a month. Others, she added, are effectively bonded labor, working to pay back the money they owe for their passage to Britain. It’s a stark difference from the lavish lifestyles they were promised.

As for telling their friends and family back home about their situation, it’s all but impossible. “It becomes extremely dangerous to speak up,” said Bruci. Instead, once they do get online, they feel obliged to post a success story. “They want to be seen as brave. We still view the man as the savior of the family,” said Bruci, who is herself Albanian.

Bruci believes that some people posting on TikTok about their positive experience going to the U.K. could be “soldiers” for traffickers. “Some of them are also victims of modern slavery themselves and then they have to recruit people in order to get out of their own trafficking situation.”

As I was reporting this story, summer was just around the corner and open season for recruitment had begun. A quick search in Albanian on TikTok brought up a mass of new videos advertising crossings to the U.K. If you typed in “Angli” — Albanian for “England” — the top three videos to appear all involved people making their way into the U.K. One was a post advertising cheap crossings, and the other two were Albanians recording videos of their journeys across the channel. After we flagged this to TikTok, those particular posts were removed. New posts, however, still pop up every day.

With the British government laser-focused on small boat crossings, and drones buzzing over the beaches of northern France, traveling by truck was being promoted at a reduced price of 3,000 pounds (about $3,800). And a new luxury option was also on offer — speedboat crossings from Belgium to Britain that cost around 10,000 pounds (about $12,650) per person.

Kevin Morgan, TikTok’s head of trust and safety for Africa, Europe and the Middle East, said the company has a “zero tolerance approach to human smuggling and trafficking,” and permanently bans offending accounts. TikTok told me it had Albanian-speaking moderators working for the platform, but would not specify how many. 

In March, TikTok announced a new policy as part of this zero-tolerance approach. The company said it would automatically redirect users who searched for particular keywords and phrases to anti-trafficking sites. In June, the U.K.’s Border Force told the Times that they believed TikTok’s controls had helped lower the number of small boat crossings into Britain. But some videos used deliberate typos to get around TikTok’s keyword controls. As recently as mid-August, a search on TikTok brought up a video with a menu of options to enter Britain — via truck, plane or dinghy.
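To make the evasion mechanics concrete, here is a minimal sketch in Python of the difference between exact keyword matching and a fuzzier check. The blocked phrases, function names and edit-distance threshold are all invented for illustration; nothing below reflects TikTok's actual moderation system, only the general weakness of exact matching that deliberate misspellings exploit.

```python
# Illustrative sketch only: why exact phrase matching misses deliberate typos.
# BLOCKED_TERMS and the edit threshold are hypothetical, not TikTok's system.
BLOCKED_TERMS = {"small boat crossing", "cheap passage england"}

def exact_match_redirect(query: str) -> bool:
    """Naive moderation: redirect only when a blocked phrase appears verbatim."""
    q = query.lower().strip()
    return any(term in q for term in BLOCKED_TERMS)

def levenshtein(a: str, b: str) -> int:
    """Classic edit distance between two strings (insert/delete/substitute)."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def fuzzy_match_redirect(query: str, max_edits: int = 2) -> bool:
    """Tolerant moderation: also catch queries within a couple of typos."""
    q = query.lower().strip()
    return any(levenshtein(q, term) <= max_edits for term in BLOCKED_TERMS)

print(exact_match_redirect("cheap pasage englandd"))  # False: typos slip through
print(fuzzy_match_redirect("cheap pasage englandd"))  # True: caught within 2 edits
```

Real platforms operate at a very different scale, but the cat-and-mouse dynamic is the same: a filter keyed to known phrases has to be continually updated, while a misspelling costs the poster nothing.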

In Kukes, residents follow British immigration policy with the same zeal as they do TikTok videos from Britain. They trade stories and anecdotes about their friends, brothers and husbands. Though their TikTok feeds rarely show the reality of life in London, some young people in Kukes know all is not as it seems.

“The conditions are very miserable, they don’t eat very well, they don’t wash their clothes, they don’t have much time to live their lives,” said Evis Zeneli, 26, as we scrolled through TikTok videos posted by her friends in the U.K., showing a constant stream of designer shopping trips to Gucci, Chanel and Louis Vuitton.

It’s the same for a 19-year-old woman I met whose former classmate left last year. Going by his social media posts, life looks great — all fast cars and piles of British banknotes. But during private conversations, they talk about how difficult his life really is. The videos don’t show it, she told me, but he is working in a cannabis grow house. 

“He’s not feeling very happy. Because he doesn’t have papers, he’s obliged to work in this illegal way. But he says life is still better over there than it is here,” she said.

 “It’s like the boys have gone extinct,” she added. At her local park, which used to be a hangout spot for teenagers, she only sees old people now.

Albiona Thaçi, 33, at home with her daughter.

“There’s this huge silence,” agreed Albiona Thaçi, 33, whose husband traveled to the U.K. nine months ago in a small boat. When he left, she brought her two daughters to the seaside to try to take their mind off of the terrifying journey that their father had undertaken. Traveling across the English Channel in a fragile dinghy, he dropped his phone in the water, and they didn’t hear from him for days. “Everything went black,” Thaçi said. Eventually, her husband called from the U.K., having arrived safely. But she still doesn’t know when she’ll see him again. 

In her 12-apartment building, all the men have left. “Now we have this very communal feeling. Before, we used to knock on each others’ doors. Now, we just walk in and out.” But Thaçi’s friends have noticed that when they get together for coffee in the mornings, she’s often checked out of their conversation. “My heart, my mind, is in England,” she said. She plans to join her husband if he can get papers for her and their daughters. 

The absence of men hangs over everything. In the village of Shishtavec, in the mountains above Kukes, five women crowded around the television one afternoon when I visited. It was spring, but it still felt like winter. They were streaming a YouTube video of dozens of men from their village, all doing a traditional dance at a wedding — in London. 

Adelie Molla and her aunt Resmije Molla watch television in Shishtavec.

“They’re doing the dance of men,” said Adelie Molla, 22. She had just come in from the cold, having collected water from the well up by the town mosque. The women told me that the weather had been mild this year. “The winter has gone to England,” laughed Molla’s mother Yaldeze, 53, whose son left for the U.K. seven months ago. Many people in their village have Bulgarian heritage, meaning they can apply for European passports and travel to Britain by plane, without needing to resort to small boats.

The whole family plans to eventually migrate to Britain and reunite. “For better or worse I have to follow my children,” said Yaldeze, who has lived in the village her whole life. She doesn’t speak a word of English. “I’m going to be like a bird in a cage.” 

Around the town, some buildings are falling into disrepair while others are half-finished, the empty window-frames covered in plastic sheeting. A few houses look brand new, but the windows are dark. Adelie explained that once people go to the U.K., they use the money they make there to build houses in their villages. The houses lie empty, except when the emigrants come to visit. And when they come back to visit their hometown, they drive so that they can show off cars with U.K. license plates — proof they’ve made it. 

 “This village is emptying out,” Molla said, describing the profound boredom that had overtaken her life. “Maybe after five years, no one will be here at all anymore. They’ll all be in London.”

The old city of Kukes was submerged beneath a reservoir when Albania’s communist regime built a hydropower dam in the 1970s.

The oldest settlements of Kukes date back to the fourth century. In the 1960s, when Albania’s communist government decided to build a hydropower dam, the residents of Kukes all had to leave their homes and relocate further up the mountain to build a new city, while the ancient city was flooded beneath an enormous reservoir. And in the early 1970s, under Enver Hoxha’s paranoid communist regime, an urban planner was tasked with building an underground version of Kukes, where 10,000 people could live in bunkers for six months in the event of an invasion. A vast network of tunnels still lies beneath the city today. 

“Really, there are three Kukeses,” one local man told me: the Kukes where we were walking around, the subterranean Kukes beneath our feet, and the Kukes underwater. But even the Kukes of today is a shadow of its former self, a town buried in the memories of the few residents who remain.

View of a street in Kukes, Albania.

David was deported from Britain in 2019 after police stopped him at a London train station. He tried to return to the U.K. in December 2022 by hiding in a truck but couldn’t get past the high-tech, high-security border in northern France. He is now back in Kukes, struggling to find work. 

He wanted me to know he was a patriotic person who, given the chance to have a good life, would live in Albania forever. But, he added, “You don’t understand how much I miss England. I talk in English, I sing in English, I cook English food, and I don’t want my soul to depart this earth without going one more time to England.”

He still watches social media reels of Albanians living in the U.K. “Some people get lucky and get rich. But when you see it on TikTok or Instagram, it might not even be real.” 

Besmir Billa, whose nephews worry him with their TikTok aspirations, has set himself a challenge. He showed me his own TikTok account, which he started last summer.

The grid is full of videos showcasing the beauty of Kukes: clips of his friends walking through velvety green mountains, picking flowers and petting wild horses. “I’m testing myself to see if TikTok can be used for a good thing,” he told me. 

“The idea I had is to express something valuable, not something silly. I think this is something people actually need,” he said. During the spring festival, a national holiday in Albania when the whole country pours onto the streets to celebrate the end of winter, he posted a video showing young people in the town giving flowers to older residents. 

At first, his nephews were “not impressed” by their uncle’s page. But then, the older boy clocked the total number of views on the spring festival video: 40,000 and counting.

The post The Albanian town that TikTok emptied appeared first on Coda Story.

Senegal is stifling its democracy in the dark https://www.codastory.com/authoritarian-tech/senegal-is-stifling-its-democracy-in-the-dark/ Fri, 11 Aug 2023 13:37:50 +0000 https://www.codastory.com/?p=45724 By shutting down the internet and jailing the opposition, the Senegalese government turns to the authoritarian playbook to suppress protests

On July 31, after jailing opposition leader Ousmane Sonko and dissolving the political party that he leads, Senegal’s government ordered a nationwide mobile internet shutdown. The communications ministry said the shutdown was meant to curb “hateful messages.”

The authorities had made a similar decision in June after a Senegalese court handed Sonko a two-year prison sentence in absentia, a decision his supporters believed was a politically motivated attempt to prevent Sonko from running for president in 2024. At least 16 people died when Sonko’s supporters and Senegalese police clashed on the streets of the capital Dakar. The subsequent July protests left at least two people dead.

Last week, Sonko was hospitalized after going on a hunger strike to protest his arrest.  

“We fear the government,” Mohammed Diouf, a Dakar school teacher told me. “The government does not want the world to know what is happening in our country.” He said the internet shutdown left him unable to communicate with other protesters. “There is brutal oppression, and many young demonstrators have been killed and injured. The security forces use live fire, that is the situation,” said Diouf, who opted to use a pseudonym out of fear of reprisal.

On August 2, the day before Diouf and I spoke, the Senegalese government announced an indefinite ban on TikTok, the app that young people have been using to document violent encounters between demonstrators and the security apparatus.

Fueling public anger is a widely held fear that Senegalese President Macky Sall, currently serving his second term in office, may try to run for president again in 2024. In 2016, a public referendum on presidential term limits reset the period a president can stay in power to a maximum of two five-year terms. Sall, who had, at the time, begun serving his second term, argued that the constitutional amendment “reset the clock to zero,” making him eligible to run again. 

In an address to the nation after the June protests, Sall vowed he would not run for a third term. But experts say he is to blame for the ambiguity that has fueled unrest.

“This problem has to be put at the feet of Macky Sall. For a long time, he made the potential of him running for a third time ambiguous,” said Ibrahim Anoba, an African affairs analyst and a fellow at the Center for African Prosperity. “You can imagine what the populace will feel,” Anoba told me. “More so, if the president becomes intolerant of opposition leaders.” 

Current political anxieties have been compounded by the economic downturn resulting from the Covid-19 pandemic and the food shortages triggered by Russia’s war in Ukraine. Senegal’s poverty rate was 36.3% in 2022, according to the World Bank, and the economy has also been hampered by rising debt.

The future looked much brighter in 2014, when newly discovered oil reserves appeared to set the stage for Senegal to become a major oil producer. But this oil, too, is now a source of public anxiety: Senegalese citizens fear that Sall will cede these riches to European companies.

Protesters, galvanized by Sonko amid concerns that Sall might indeed pursue a third term, worried that Sall, a geological engineer before he became president, wanted to stay on to preside over the anticipated oil boom. That fear tipped public discontent into violent unrest, particularly among the country’s youth, who decried massive corruption, the overbearing influence of France and the slowdown of the economy.

“We are fighting that the country retains the sovereignty of its wealth and natural resources which the government wants to sell off to oil firms. And for that, we will go until the end because it is our future that is at stake,” Diouf, the Dakar school teacher, told me. It is to Sonko that voters like Diouf look to reform Senegal’s system.

Sonko’s PASTEF party started in 2014 as a fringe party composed of political newcomers. Sonko, a young former tax inspector, shot to national recognition when he became a whistleblower in 2016, exposing the use of offshore tax havens by foreign companies to avoid paying taxes in Senegal. He became a member of the national assembly in 2017 and ran for president in 2019, trailing third behind Sall and Idrissa Seck, the leader of the Rewmi party.

His criticism of Sall and his larger-than-life internet presence have endeared Sonko to young voters. He rapidly became the main threat to the ruling party. And it is that threat, say Sonko’s supporters, that is driving the criminal charges he now faces, including rape (for which he was acquitted), fomenting insurrection, creating political unrest, terrorism and theft.

State measures to control protests led by Sonko supporters have been violent and draconian. The internet shutdowns also pose a threat to Senegal’s already floundering economy. In the first quarter of 2023, Senegal’s unemployment rate stood at 21.5%, and NetBlocks estimates that each day without access to mobile internet costs the country nearly $8 million.

Financial and cryptocurrency trades, as well as ride hailing and e-commerce businesses, are all seeing losses due to the network shutdowns. “With the restriction of the internet that is becoming recurrent these days, we no longer have the opportunity to sell or buy USDT,” said Mady Dia, referring to Tether, a cryptocurrency “stablecoin” pegged to the U.S. dollar. “That is an abysmal shortfall,” Dia, who works with a cryptocurrency exchange, told me.

Dia and Diouf both said they’d withdrawn money when the protests began, expecting that the banks would likely close and that financial services would be crippled were the authorities to impose an internet shutdown. 

The political situation, Dia said, and the internet shutdowns have left him contemplating options for leaving Senegal altogether. 

“Many young people are ready to abandon their country if Sall remains in power in 2024,” he told me. In the past decade, thousands of young Senegalese have sought to move to Europe in search of better fortunes, often on small boats. These perilous journeys have claimed hundreds of lives. Last month, at least 15 people drowned after a boat carrying migrants and refugees capsized off the coast of Dakar.

In a West Africa beset by political instability – the most recent example being the coup in Niger – Senegal has been cited as a model of democracy. That reputation is starting to wear thin.

“This is really bad for the region itself,” said Anoba, the analyst at the Center for African Prosperity. “As you know, Macky Sall is one of the leading figures in West Africa, and right now [as] we are trying to quench the fires of coups that are changing the political terrain, this is the last thing we want.”

Threats against Senegalese media represent another sign of democratic backsliding in the country. In June, a television channel offering live coverage of the protests was suspended for 30 days. And Papa Ale Niang, a journalist with the prominent daily newspaper Dakarmatin, was charged on August 1, like Sonko, with “inciting insurrection.”

Internet shutdowns are also a sign of faltering democratic values. “Cutting off the internet is tantamount to denying the right to information, which is a constitutional principle, not to mention international laws,” said Emmanuel Diokh, the Senegal lead at Internet Sans Frontières, an international organization that defends access to the internet. 

Since 2017, internet shutdowns have become an increasingly common tactic of information and social control in Africa. Cameroon’s long-serving president, Paul Biya, imposed an internet ban in the English-speaking region of the country in 2017 that lasted three months. In 2019, Zimbabwean President Emmerson Mnangagwa also imposed an internet shutdown in response to protests. Governments in Ethiopia, Eritrea and Equatorial Guinea have also imposed strict internet regulations in the past five years.

All of these countries have used the same rationale: The actions were intended to curb hate speech or to avoid the breakdown of order. Sall has shown one thing to the Senegalese people — the internet is not safe from government control. Instead of curbing hate speech, shutting down the internet is a sign that he is prepared to use any means necessary to decimate the opposition before the elections in February. Still, protesters like Diouf say they will not relent.

The post Senegal is stifling its democracy in the dark appeared first on Coda Story.

Migrants take the US to court over its glitchy asylum app https://www.codastory.com/authoritarian-tech/immigration-asylum-lawsuit-cbp-one/ Wed, 09 Aug 2023 13:43:02 +0000 https://www.codastory.com/?p=45696 The Biden administration’s glitchy new app is failing asylum seekers. Now, migrant’s rights groups are fighting back

It has been more than half a century since U.S. immigration laws were written to enshrine the right to apply for asylum at any port of entry to the country. But a new lawsuit argues that today, the right to seek safe haven from persecution is only accessible to people who show up at America’s doorstep with a working smartphone in hand.

Since May, migrants on the Mexican side of the U.S.-Mexico border who hope to apply for asylum have been required to make their asylum appointments through a mobile phone app operated by U.S. Customs and Border Protection, known as CBP One. The new system has effectively oriented the first — and for many, the most urgent — stage of the asylum process around a digital tool that is, by many accounts, glitchy and unreliable.

On July 27, immigrants’ rights groups filed a class action lawsuit against the Biden administration over its use of the app, setting the stage for a legal showdown over the government’s decision to shift the first stage of the asylum application process into the realm of automation.

The plaintiffs include 10 migrants who sought asylum along the border but were turned away by U.S. immigration officials because they hadn’t made appointments using CBP One. Their suit alleges that the U.S. government’s use of CBP One has created steep, and in some cases insurmountable, technological obstacles that have prevented migrants from pursuing their right to asylum. As a result, they’re often left with little choice but to remain in Mexican border towns, where violence and crime targeting migrants are notoriously high.

CBP One became the primary entry point into America’s asylum system after the Biden administration lifted Title 42, a Trump-era policy that barred most people from seeking asylum in the U.S. because of the Covid-19 pandemic. Now, in order to be eligible for protection, migrants must possess an up-to-date smartphone, internet access, mobile data and the ability to read and write in English, Spanish or Haitian Creole — the only three languages the app offers. These requirements, the lawsuit argues, disadvantage refugees who don’t have or can’t afford a smartphone and those who lack the requisite language skills. The suit also argues that the government has established new criteria for asylum applications that do not align with asylum laws that were vetted and approved long before the dawn of the smartphone. Imagine telling the authors of the modern asylum system, which was created after the Holocaust, that this guarantee is only accessible to people who arrive at the border with a miniature computer in their pocket.

And that’s to say nothing of the technology’s myriad flaws. As I reported in June, the app is notoriously unreliable, with facial recognition software that misidentifies darker skin tones and a tendency to crash, freeze and log users out while they are trying to schedule their asylum appointments.
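The disparity in face matching has a mechanical explanation worth spelling out. Verification systems typically reduce two photos to a similarity score and accept the match only above a fixed threshold; if the model's scores for genuine matches run systematically lower for darker skin tones, the same global threshold locks those users out far more often. The toy simulation below shows the effect; the threshold, score distributions and gap between groups are assumptions chosen for illustration, not CBP One's actual parameters.

```python
# Hypothetical illustration, not CBP One's code: how one global acceptance
# threshold turns a small gap in model accuracy into a large gap in rejections.
import random

random.seed(7)
THRESHOLD = 0.80  # assumed global accept threshold

def match_score(mean: float) -> float:
    """Stand-in for a model's similarity score on a genuine user's photo."""
    return min(1.0, max(0.0, random.gauss(mean, 0.07)))

def rejection_rate(mean: float, trials: int = 10_000) -> float:
    """Fraction of genuine users who fall below the threshold and are rejected."""
    return sum(match_score(mean) < THRESHOLD for _ in range(trials)) / trials

# Assumed means: the model scores group B's genuine matches slightly lower.
print(f"group A rejected: {rejection_rate(0.92):.1%}")  # roughly 4-5%
print(f"group B rejected: {rejection_rate(0.84):.1%}")  # several times higher
```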

“If I could give negative stars I would,” a user seethed on CBP One’s App Store review page, where the app has just 2.6 stars. “My family are trying to flee violence in their country and this app and the photo section are all that’s standing in the way.”

Critics have been sounding the alarm about these problems since the Biden administration announced the policy. Amnesty International argued that the government’s use of the app violates international human rights law by placing unnecessary technical and practical barriers in the way of migrants seeking to exercise their legal right to apply for asylum.

Immigration attorney Nicole Ramos spoke with me about the technical and linguistic challenges that asylum seekers encounter when they attempt to schedule an appointment on the app. Ramos is the Border Rights Project director for the immigrants’ rights group Al Otro Lado, which provides legal support to asylum seekers on both sides of the U.S.-Mexico border.

“There are days where the app is unable to be used due to system-wide glitches,” she said. “There are days and weeks where people keep getting an error message that says that they need to be closer to the border in order to make an appointment and they are literally standing at the port of entry.” 

Asylum seekers who don’t speak or read English, Spanish or Haitian Creole are left to try to make sense of the error messages and the app’s directions on their own. The government does not provide translation support to people who do not speak a language supported by CBP One.

“The government is putting all the onus for language access on the asylum seeker themselves and already overburdened nonprofit organizations,” Ramos said. She explained that Al Otro Lado hires interpreters to help applicants who don’t speak any of the languages that the app offers but noted that this responsibility should fall on the government, not organizations like hers. “They are externalizing their responsibility to afford language access to individuals trying to access our legal system.” 

The government’s policy grants exceptions for asylum seekers with “exceptionally compelling circumstances,” like acute medical emergencies or risk of death, and says that those individuals should be permitted to ask border officials for asylum without a CBP One appointment. But in practice, the plaintiffs say, the app has effectively become the only pathway to access asylum, even for people who are eligible for the government’s exceptions. Ramos said Al Otro Lado has seen border officials turn away asylum seekers without appointments who were in the middle of medical emergencies, including a man having an epileptic seizure at the port of entry. “The Red Cross was called, police were there and they were aware of the situation and they still refused to process him,” she said. Ramos also shared the story of an asylum seeker who was killed in Mexico while waiting for a CBP One appointment. When the victim’s surviving family members approached border officials with the person’s death certificate in hand and asked to apply for asylum without using the app, they were instructed to schedule an appointment on CBP One.

The lawsuit alleges that border officials are “almost uniformly requiring asylum seekers to have a CBP One appointment in order to be inspected and processed, regardless of whether they may be eligible for an exception.” It describes two separate instances in which immigration officials rejected asylum seekers’ requests for special consideration after they were kidnapped by criminal groups in Mexico and missed their scheduled appointments. One of the victims escaped but left behind all of his valuables, including his cell phone, according to the lawsuit. When he appeared at the port and asked for asylum, border officials “emphasized that he needed to sign up through the app and denied him any opportunity to explain the exigencies of his situation.”

The post Migrants take the US to court over its glitchy asylum app appeared first on Coda Story.

Inside New Mexico’s struggle to protect kids from abuse https://www.codastory.com/authoritarian-tech/new-mexico-child-welfare/ Thu, 27 Jul 2023 14:44:18 +0000 https://www.codastory.com/?p=44250 A safety scoring tool was supposed to improve child welfare. But former caseworkers say it’s not helping

Ivy Woodward can turn her emotions off like a water faucet. 

It served her well when she worked in child protective services in Hobbs, a small oil town in southeastern New Mexico.

She looks at it this way: “If you give in to emotion, the job’s not going to get done. You don’t process emotion. You walk in on a scene, and the first thing you see isn’t a tragedy. The first thing you see is a checklist of things you need to do to resolve the issue.”

But when Woodward looks back on all the horrible things that she witnessed as a caseworker, the weight of the decisions she had to make is almost too much to bear.

“Each decision that you make changes your life. Every single, solitary decision that I made, I still carry it,” Woodward said when we met in the spring.

Woodward used to work for the state of New Mexico’s Children, Youth, and Families Department, supporting children who had been the victims of abuse or neglect. She was part of the CYFD staff teams that deliberated on whether to take kids away from their parents and put them into foster care.

Woodward is still haunted by the memories of one little girl in particular. Woodward had reason to believe that the girl, who was living in foster care with her grandfather, was being abused.

But something stood in the way: It was a safety assessment tool that the state requires caseworkers like Woodward to use. Formally known by its somewhat clunky brand name — “Structured Decision Making” — the tool is meant to help determine whether a child is in great enough danger to be removed from their home. 

Her concern was based on more than just a hunch. The girl’s mother had told Woodward that the grandfather was an abuser – he had raped her when she was young. Woodward took this information to her team and called for another office to send a caseworker to investigate. But that caseworker’s report, based on the tool, indicated that there was no reason for concern about the girl’s safety. Despite Woodward’s pleas, CYFD staff decided to keep the girl with her grandfather.

It became clear months later that Woodward was right — the little girl’s grandfather had been sexually abusing her all along. The girl was eventually taken away from her grandfather and placed in a different foster home.

The agency is adamant that the tool isn’t meant to supersede a caseworker’s judgment. “It’s not about giving the job of a caseworker to an electronic tool,” said Sarah Meadows, the head of the agency’s research, assessment and data bureau. “That’s 100% not what it’s intended to do.”

But in cases like this one, it felt to Woodward as if the tool had won out.

“You can no longer go on all of your training, all of your experience in the field. It’s a moot point because the tool said so,” Woodward told me.

“You’re going off of a scoring system now. And if the family doesn’t meet the score, you have to turn around and walk out.”

Across the U.S., child welfare agencies are looking to algorithms and risk assessment tools to help support the arduous labor of caseworkers in child protective services agencies. The hope is that these tools will help caseworkers make better and more equitable decisions that will ultimately improve outcomes for vulnerable children. But these agencies’ problems run deep. Oftentimes, there is no single tool or policy solution that can fix them.

Facing high rates of child abuse and neglect, the New Mexico Children, Youth, and Families Department rolled out the Structured Decision Making safety scoring tool in 2020. The goal was to help the agency decide whether or not children are safe living with their parents and to assess the risk of future abuse if a child remains in their home. But in the face of severe staffing shortages and a push to remove kids from their families in only the most extreme cases, former CYFD staff and children’s attorneys in New Mexico say that the safety scoring tool has been replacing caseworker judgment and leaving some kids in harm’s way.

New Mexico had the 15th highest rate of child abuse or neglect in the 2021 fiscal year, a drop from the 8th highest in 2020. About a third of children who died from abuse, neglect or homicide between 2015 and 2021 had prior involvement with child welfare, according to the New Mexico Department of Health.

One of them was named James Dunklee Cruz. There were countless warning signs that the little boy was at risk of harm. When he was just a few months old, caseworkers found ample evidence of neglect: The home where he lived with his mother was roach-infested and strewn with trash and dog feces. 

In October 2019, when Dunklee Cruz was four, he was brought to the hospital with multiple injuries, including a black eye, a bruised penis and an injured shoulder. He told a CYFD investigator that he had been abused by three people in his life. But somehow those allegations were declared unsubstantiated. The Structured Decision Making tool classified Dunklee Cruz as “safe with a plan,” allowing him to stay with his mother.

Two months later, James Dunklee Cruz died as a result of blunt force trauma to his head and torso at the hands of Zerrick Marquez, one of the men he had named as his abuser two months before. Marquez pleaded guilty to killing Dunklee Cruz and was sentenced to 30 years in prison.

CYFD conducted nine investigations into allegations of abuse and neglect during the boy’s short life. Caseworkers put what they call “safety plans” in place for Dunklee Cruz, but this wasn’t enough to keep him safe. These details appear in a publicly posted child fatality review summary report. The section of the document drawing on the child’s autopsy also describes a litany of injuries, including “healing jaw fractures and healing subdural hemorrhage indicating significant blunt head trauma that occurred earlier than the acute injuries” — in other words, injuries that didn’t kill him but proved that Dunklee Cruz was at risk of serious harm before his final days.

A wrongful death lawsuit is also working its way through the courts. A complaint filed in December 2022 in a federal district court in New Mexico accuses CYFD of failing in its duty to protect the boy and states that Dunklee Cruz’s mother repeatedly violated the safety plans CYFD put in place. The complaint also specifically points to the Structured Decision Making tool.

It reads: “Over the span of his four years of life, CYFD investigators repeatedly failed to rely on accurate and well-documented facts when it utilized the agency’s Safety Risk Assessment Tool, causing its repeated contacts with James to result in flawed and underestimated risk assessment and flawed decision-making resulting in James’ death.”

When I asked about the boy’s case, CYFD offered this response: “The death of James Dunklee Cruz is tragic. The loss of any child is felt deeply and grieved by our caseworkers and staff. Regarding the function of the tool in this case, he was identified as safe with a plan. This means that the safety assessment tool identified at least one danger indicator and that in order for the child to stay in the custody of the parent, a plan was required. Our caseworkers worked with James’ mother to find a safe place for her to live and alternative childcare for James to mitigate against the threats that were identified by the caseworker.”

The Juvenile Justice Center which houses the Bernalillo County Youth Services Center Children’s Court in Albuquerque, New Mexico.

What happened to James Dunklee Cruz reflects the most significant problem that former CYFD workers raised when they talked to me about the Structured Decision Making safety tool: It doesn’t always convey how much danger kids are truly facing.

The tool’s launch coincided with a change in the agency’s approach to decision-making about when to remove a child from their family’s home. This was part of a nationwide shift with the passage of the federal Family First Prevention Services Act, a policy that was designed to keep children “safely with their families to avoid the trauma that results when children are placed in out-of-home care.”

The safety tool doesn’t tell caseworkers what to do. It is meant to facilitate a conversation between the worker and their supervisor about whether to declare the child “safe,” “safe with plan” or “unsafe.” The tool sets the tone for what, ideally, should be an extensive, in-depth dialogue between people from across the agency. But due to staffing shortages, it doesn’t always play out this way. The tool “doesn’t take into account that there’s not enough workers, there’s not enough supervisors,” said Matt Esquibel, a regional manager at CYFD.

Some former caseworkers have told me that, in this context, the assessment takes on an outsized role in determining a child’s fate.

“It’s not meant to streamline or fast-track decisions, but it helps focus the conversations, which is helpful to supervisors and to workers,” said Meadows, the head of the agency’s research, assessment and data bureau.

Former CYFD workers told me that the risk and safety assessments did not always match what they observed about the level of danger a child was facing, particularly when it came to substance abuse, domestic violence or repeated involvement with child protective services.

“We saw issues with the safety tool immediately,” said one former CYFD worker who had knowledge of the tool and reviewed investigations in which it was used. She requested anonymity out of fear of retaliation. 

The former CYFD worker said she would see cases in which she thought a child should have been removed from the home but the safety tool didn’t reflect that.

“I’m reading a report that comes and I’m reading their notes that they’ve entered. And then I’m looking at their safety assessment, [and it] does not match what I’m reading,” she said.

Workers are only allowed to check off a danger on the safety tool if they can observe or otherwise prove it. But investigators don’t always have time to do multiple home visits or to gather more information, said Esquibel. They may not be able to gather all the details right away, and children may not initially disclose abuse. There is an “override” for the risk assessment that requires supervisor approval. If the worker thinks that the risk score is too low, they can bump the score up one level. CYFD’s Meadows emphasized that workers should use their judgment and critical thinking, work with supervisors and override the tool if necessary.

“I think the workers and supervisors do the best that they can when they’re out there,” Esquibel said. “But your assessment is only as good as what information you’re gathering or who’s available at the time.” 

Ultimately, the former CYFD staff member who requested anonymity thinks the assessments are not capturing the seriousness of some cases and that the consequences for kids are real.

“I think it’s leading to dangerous situations for children,” she said. “I think the agency is leaving the children in situations based on that tool when they should be removing them.”

Meadows said that shouldn’t happen. “If a worker feels strongly that a child is unsafe and they don’t want to walk away from that child in that home, they shouldn’t. Safety tool be damned,” she said.

Ivy Woodward at her home in West Texas.

Even though she’s moved on from CYFD, this all still weighs heavily on Ivy Woodward, who has worked with children for most of her career. Before working in child welfare, Woodward, who is Native American and Hispanic, taught elementary school on the Apache reservation in San Carlos, Arizona and in southwestern New Mexico. In 2017, CYFD brought her on as a permanency planning supervisor. This meant she worked on cases where the agency had credible evidence that a child was being abused or neglected at home. The work spoke to core elements of Woodward’s personality. 

“I’m a protector,” Woodward told me. “You can do a lot to me and get away with it. But if I see somebody doing something to someone else, that triggers my inner anger.” 

She calls it like she sees it and pushes back when she disagrees. “I don’t know what is broken in my head, but I question everything,” Woodward said.

When Woodward left CYFD in the summer of 2020, she and a colleague filed a lawsuit alleging that they faced retaliation after raising concerns about a case in which a child was severely injured after she and her siblings were returned to their parents. The agency settled the lawsuit for more than $300,000 without acknowledging liability.

Woodward has a fast, forceful way of speaking, a reflection of her often overly caffeinated state. But when she talks about the kids she worked with at CYFD and the horrible things she heard or saw on the job, her voice gets a little higher. Her emotions begin to flow.

“You do have to be able to turn off the emotions and make those cold, hard decisions when the time comes to make them,” she said. “But until that time comes, you have to see people, not casework.”

Woodward now lives across the border in a tiny county in West Texas with her husband and two daughters. She works as the chief of juvenile probation and coordinates emergency management for the county.

In some ways, Woodward was an outlier among other CYFD staff. Many start working with the agency soon after college and have little or no experience in child welfare. The agency struggles to stay fully staffed — this spring, nearly a quarter of positions were unfilled. A July 2022 review by an outside consultancy found that CYFD employees felt overwhelmed by the work they were being asked to do. Staff said they would rush from one emergency to the next and had little ability to make progress on other cases.

This is not unique to New Mexico. A caseworker I spoke with in Indiana described being stretched so thin that he could do little more than identify each fire before racing on to the next, with no time to actually put any of them out.

The issues of understaffing and high turnover rates were top of mind for many CYFD workers. High turnover isn’t just bad for morale. It directly affects the ability of those who remain on staff to do their work. When one person leaves, those who stay have to absorb their caseload. It is daunting. The review described a “culture of fear” in which staff were afraid that if something bad were to happen with a case, they would be punished or “scapegoated.”

And the forced intimacy of the work can be grating and even traumatic. Caseworkers must regularly intervene in painful moments of struggle and conflict within families, and they are sometimes met with resistance. As agents of the state, they are caught between a bureaucracy that requires them to treat each situation as consistently and objectively as possible and real life-and-death conflicts in which people’s actions are largely driven by emotion.

The Children, Youth, and Families Department offices in Albuquerque, New Mexico.

For strained child welfare agencies, algorithms and risk assessment tools are an attractive solution to the vexing challenge of maintaining consistent decision-making practices.

Some states have experimented with predictive analytics, with limited success. Illinois used an algorithm to estimate the likelihood that a child would die or be seriously injured as a result of abuse or neglect. Social workers were flooded with cases erroneously determined to be urgent, while children that the algorithm deemed low-risk were dying. The state soon stopped using the tool after the Illinois Department of Children and Family Services declared it ineffective. 

A child welfare algorithm in Allegheny County, Pennsylvania, is currently facing scrutiny from the U.S. Department of Justice. Using arrest records, Medicaid data and documented struggles with substance abuse, the algorithm generates a score from 1 to 20 that determines whether to open a neglect investigation. Reporting by the AP found that the algorithm disproportionately flagged Black children for neglect investigations. There was also evidence that the algorithm did the same for parents with disabilities.

Caught between pushback against opaque algorithms and the desire to use technology to streamline decision-making, some states are turning to scoring tools that are simpler and less automated. New Mexico’s Structured Decision Making tool, created by the nonprofit Evident Change, is one of them. Oregon, New Hampshire and California also use assessment tools built by Evident Change.

Structured Decision Making offers a checklist that is meant to help the investigator understand “the risk of imminent and serious harm,” according to a CYFD progress and impact report. Children are ranked “safe,” “safe with plan,” which involves in-home services, or “unsafe,” which is grounds for removal. There’s also an actuarial risk scoring tool, which is meant to assess “the likelihood of any future maltreatment” and additional CYFD involvement in the next 18 to 24 months, if the child remains with the family.

The safety scoring tool asks about abuse or neglect, including physical or sexual abuse, failure to meet the child’s basic needs, unsafe living conditions, emotional harm or unexplained injuries. Both assessments are intended to guide caseworkers to think about risk factors, vulnerabilities and the impact on the child.
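To make the mechanics concrete, here is a minimal sketch of how a checklist-style safety assessment of this kind can work. The indicator names, the mitigation logic and the output labels below are illustrative assumptions, not the actual wording or rules of New Mexico’s tool.

```python
# Hypothetical sketch of a checklist-style safety assessment.
# Indicator names and logic are illustrative only, not CYFD's actual tool.

DANGER_INDICATORS = [
    "physical_or_sexual_abuse",
    "basic_needs_unmet",
    "unsafe_living_conditions",
    "emotional_harm",
    "unexplained_injuries",
]

def assess_safety(observed: dict, plan_can_mitigate: bool) -> str:
    """Classify a child as 'safe', 'safe with plan' or 'unsafe'.

    `observed` maps each danger indicator to True only if the worker can
    observe or otherwise prove it; unverified suspicions stay False.
    """
    dangers = [name for name in DANGER_INDICATORS if observed.get(name)]
    if not dangers:
        return "safe"
    if plan_can_mitigate:   # in-home services can control every identified danger
        return "safe with plan"
    return "unsafe"         # grounds for removal

# One proven danger that in-home services could address:
print(assess_safety({"unsafe_living_conditions": True}, plan_can_mitigate=True))
# -> safe with plan
```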

“What Structured Decision Making tries to do is to help workers and supervisors make accurate, consistent and equitable decisions at these high-stakes moments,” said Phil Decter, the director of child welfare at Evident Change.

Structured Decision Making is also “intended to reduce bias, whether that’s related to race, ethnicity, socioeconomic status, making sure that we’re not conflating poverty with neglect,” said CYFD’s Meadows.

But in New Mexico, as the Dunklee Cruz case and insights from caseworkers make clear, the tool does not always work as intended. And the tool can’t solve some of CYFD’s biggest problems. The agency doesn’t have the workers to meet the needs of the population. Emblematic of a national trend, CYFD is chronically understaffed. Workers juggle heavy caseloads and often have precious little time to dedicate to each child’s case.

The safety tool isn’t meant to fix that. CYFD says hiring is a priority. “Structured Decision Making is not intended to replace human beings in terms of lightening their caseload,” Meadows said. “The role of it is to create consistency, making sure that we’re looking at every angle of the case, every potential impact to a child.”

But for caseworkers racing from one emergency to the next, the tool begins to play a different role. It sometimes becomes a shortcut, they told me — a stand-in for real human decision-making, in a system already weighed down by the rigid requirements of the state.

Reed Ridens at his home in New Mexico.

Reed Ridens remembers everything about the day the state took him away from his father almost seven years ago. It was a typical January afternoon at school. About an hour before classes ended, Ridens, who was 15 at the time, was pulled out of orchestra practice and brought to a conference room. Waiting for him were two of his teachers, the school social worker, representatives from CYFD, a police officer and his dad. 

“I’m just looking around like, what is going on?” Ridens recalled. 

For nearly an hour, the adults in the room went back and forth about whether Ridens’ dad could take care of him. There were concerns, they said, about neglect and his father’s alcoholism. 

“The entire time, I was just sitting there, crying, like, ‘Hey, please don’t take me out of my home,’” Ridens said.  

His protests were futile. Ridens stayed in the foster care system until he was 18, cycling through 15 different placements. It left him with deep-seated trauma, compounded by his father’s death four years ago.

“I felt like the state was taking me out of my household and then not doing any better for me than my father did. And in fact, actually putting me in worse-off situations,” he said. 

“I don’t really feel like they saw me as a person,” he told me when we met in Albuquerque.

“I feel like they didn’t see me as more than a list of checkmarks. I feel like they didn’t see my dad as anything more than a monster.”

Today, kids in a position like Ridens’ are not only dealing with adults trying to decide what’s best for them. Their fate is also shaped by tools like Structured Decision Making.

Ridens stayed in the foster care system until he was 18.

How did New Mexico get here? In part, the aim was to keep kids who didn’t need to be removed out of foster care, said Beth Gillia, the former deputy secretary of CYFD. 

“Foster care really should be the absolute last resort in extreme circumstances where needs cannot be met in the home and where a child cannot be safe at home,” she told me.

The state paid the nonprofit organization Evident Change $1.3 million to develop a risk and safety assessment tool, according to a state legislative finance committee report. The nonprofit creates similar tools for criminal justice, education and adult protective services. 

After a pilot in some counties in 2019, including in the county where James Dunklee Cruz lived, Structured Decision Making was rolled out statewide in January 2020.

The tool works best in situations where there is plenty of time and staff capacity to dedicate to this kind of deliberation. But CYFD’s investigations unit was short almost 25% of its workforce as of May 2023, according to the state’s public statistics dashboard, and maintains a steep turnover rate.

“If a child welfare organization is not being resourced well, if it’s understaffed or if caseloads are high, it’s going to be hard for optimal work to happen in any situation,” said Decter, who previously worked in child welfare in Massachusetts. “Good decision-making takes time.”

A report presented to a CYFD steering committee found that, according to focus groups made up of CYFD workers, Structured Decision Making is “not being used as it was designed to be utilized. They go out and do their investigation and then come back in and click whatever needs to be clicked to show it has been done.” 

A former investigator in Hobbs told me that the Structured Decision Making tool just added more work to her plate. 

“It didn’t take a whole lot of time, but it was just another tedious step that you’re going through when you’ve already made up your mind,” she said.

As a result, she said, some people rushed through checking boxes on the safety tool. 

“I watched people go click, click, click, click, click, and just move on,” she said. It wasn’t the deciding factor. But she did feel like it could be “manipulated” to justify a certain decision.

CYFD says this isn’t how it’s supposed to be used. “Safety assessment is not a quick activity,” said Meadows. “Workers should take their time with it, really do their best to engage the family to get as much information as possible so that the safety assessment is accurate.”

Ivy Woodward, the former supervisor in Hobbs, had concerns about the safety scoring tool from the very beginning. In particular, she worried about how it dealt with a caregiver’s substance use, which is not listed as one of the danger indicators that must be checked in order for the agency to remove a child. In a sharp pivot from New Mexico’s previous assessment, substance use is treated as a “complicating factor” rather than a deciding one.

The risk tool adds points if the parent struggles with substance abuse. However, the tool doesn’t weigh substances differently. Meth gets the same number of points as marijuana, for example. 
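To illustrate the criticism, here is a toy version of a flat, point-based risk score. Every factor name and point value is invented for the example and does not come from Evident Change’s actual scoring sheet.

```python
# Hypothetical flat, point-based risk score. All names and values are invented.
RISK_POINTS = {
    "prior_cyfd_involvement": 2,
    "caregiver_substance_use": 1,   # one point, whatever the substance
    "domestic_violence_history": 1,
    "child_under_three": 1,
}

def risk_score(factors: set) -> int:
    """Sum the points for every factor present in the case."""
    return sum(RISK_POINTS[f] for f in factors if f in RISK_POINTS)

# A caregiver using meth and one using marijuana score identically,
# because the tool only records "substance use," not which substance:
print(risk_score({"caregiver_substance_use"}))  # -> 1 in both cases
```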

In the Structured Decision Making training, Woodward and some of the other experienced caseworkers challenged this, fearing that it would put children at risk. The discussion got so heated that the head of the agency came to intervene. Woodward said she was effectively shut down. It was clear that the agency would be using the tool, whether she liked it or not.

Other CYFD workers and child welfare attorneys also raised concerns about how the safety and risk assessments handle drug abuse, a factor affecting almost one-third of children who were victims of maltreatment in 2020, according to statistics from the U.S. Department of Health and Human Services. 

While investigators are supposed to consider substance use in their decision about removing a child, it’s not supposed to be the sole reason for removal. This is part of a recent change in the agency’s approach to substance use. Caseworkers are now told to focus not solely on substance use, but rather on the impact substance use has on the caregiver’s ability to care for their children, said Gillia, the former deputy secretary of CYFD. 

“It’s only if the substance use interferes with parenting that it becomes abuse or neglect,” Gillia said. “So I think what the tool is trying to do is force a look at what parenting behavior is impacted by the substance use.” 

Phil Decter at Evident Change says the safety tool also helps when it comes to an inexperienced workforce. It has detailed instructions that help workers decide whether to check ‘yes’ or ‘no’ for a danger indicator. It points staff without a background in child welfare in the direction of things to look for, he said. 

But out in the field, Woodward sees problems with this. The decisions are so monumental — literally life or death. For Woodward, the tool is not a substitute for a seasoned supervisor guiding less experienced staff through decisions.

“It becomes a crutch for a lack of confidence,” said Woodward. “I don’t think that being armed with a piece of paper and a laptop is an adequate replacement for someone who’s been in the trenches for 20 years and can tell them this is what you do.”

And the tool doesn’t capture the unspoken cues that an investigator may notice, like a child who can’t make eye contact with a family member or won’t answer open-ended questions, Woodward said. 

The safety tool has an “other” option where investigators can write in safety concerns not addressed by the nine danger indicators. But that should be used “rarely and infrequently,” said Decter. “That’s by design. The other danger indicators should be sufficient.”

The success of the tool depends on how it’s used, and this is where Woodward hit roadblocks in Hobbs. She said her supervisors would tell her she was paying attention to things that the safety tool said weren’t an issue, rather than focusing on what she was called upon to investigate. Woodward felt like she was being instructed to ignore history, context and other dangers that she knew were significant from past experience. 

Information about those more subtle cues may be presented to a judge if CYFD files a petition to remove a child. But if the tool indicates that the child doesn’t need to be removed, the case likely won’t reach that stage.

Former CYFD staff like Gillia emphasized that the agency wants to keep kids living with their families unless they are clearly at risk of imminent and grave harm. The agency settled a lawsuit in 2020 that accused the state of failing to take adequate care of foster children in CYFD custody.

But former caseworkers I spoke with worried that the tool was being used as a way to all but ensure that kids would remain in the home, even in cases where it might leave them at risk. The worry, for people like Ivy Woodward, was that the tool was being used to justify decisions that had already been made.

Evident Change emphasizes that “tools don’t make decisions, people make decisions.” But former CYFD workers told me they worried that this particular tool has an outsized impact on the agency’s final decision.

CYFD commissioned a report from an outside group, Collaborative Safety, to look at what went wrong in five specific cases from 2021 in which children died. In the report, released in July 2022, staff involved in those cases said that sometimes the Structured Decision Making tool would say the child is “safe,” even if the worker felt there were “significant concerns with the family.”

“This places staff in the position where they perceive they cannot act on those concerns as it would go against what the tool’s output is,” wrote the report’s authors.

“Investigators were just using the tool as the end-all-be-all to a decision and an assessment. That’s not correct. We don’t want it to substitute their good judgment,” former CYFD Secretary Barbara Vigil told members of the New Mexico House Appropriations and Finance Committee in February 2023. In response to the Collaborative Safety report, CYFD announced they would overhaul their training protocols and pledged to “make sure that every member of staff uniformly knows how to use the tool, including through enhanced training to investigators and supervisors statewide.”

The former CYFD worker I spoke to who requested anonymity saw this reflected in the investigations she reviewed. “I don’t even know how many cases I reviewed where it’s like, you should have removed that kid immediately. And they didn’t because of the safety tool,” she said. “We would always say, use your common sense. This is a guide.” But some workers and managers still put too much emphasis on the tool.

Esquibel said the tool played a major role in facilitating decision-making. “The weight is 100% on your safety assessment because that’s really the snapshot of what happened the day that that worker was there,” he said.

CYFD’s Meadows put it differently: “It’s not just a snapshot in time,” she said. “Safety assessments are not a one-off, one and done thing. Safety is assessed on an ongoing basis when we have an open case because sometimes it does take effort and time to learn more about a family or child situation.”

Woodward doesn’t think the tool should carry so much weight. Instead, it should be “something in your toolbox that you utilize to help you through the process,” she said. “I don’t think they should be used as the ultimate decision maker.”

Vanna at her home in New Mexico.

When Vanna was first removed from her parents at age five, the adults in her life told her that her parents were going on vacation. 

She remembers a woman pulling up to their house and talking to her parents. Her mother was crying, her father was trying to calm her down. The strange woman went up to her younger brother, who was four at the time, and said, “How would you like to go somewhere else?”

“I looked at her, I said, ‘You’re not taking my brother,’” said Vanna. 

Vanna, who is now 21 and using a nickname to preserve her privacy, has been fiercely protective of her little brother since they were small. 

As the woman stood talking to her parents, Vanna tried to get him out of his car seat. “And I tried to run with him, and she started running after us. And she said, ‘I’m not trying to take your brother. I’m trying to take you both. You’re going to this lady. Your parents are going on vacation.’” 

She didn’t realize until later that she wasn’t returning home. Vanna spent 13 years in foster care until she aged out at 18. She estimates she lived in more than 50 placements. 

In foster care, Vanna felt like she was treated like a case number. Someone else made decisions about every aspect of her life. Someone else had power over her.

“I got numb. I became this robot. You want me to be a puppet, guys? I’ll be a puppet. Pull my strings and do whatever you want because that’s how you treat me,” she said. 

Vanna would tell the adults around her what she wanted, but she didn’t feel like they listened.

“They would always say, ‘Honey, we wouldn’t make any decision if it wasn’t going to be safe for you or if we weren’t keeping your best interests in mind,’” she said. “How do you know what my best interest is?”

The safety assessment that’s currently in place rolled out statewide the year Vanna aged out of the system. But when she looks back at her own experience, systems like this still worry her. She thinks the assessments used to make decisions need to be more personalized, otherwise they do more harm than good. 

“How do you put everyone in the same box, the same population? You put them under the same microscope, but they’re not the same. They’ve had individual situations,” she said.

If the assessments are too generalized, kids won’t end up getting the help they need, Vanna said. Just as the assessments used to evaluate their needs are flattened and standardized, the care kids get is too.

Vanna spent 13 years in foster care.

For people like Vanna, many aspects of the child welfare system were dehumanizing. Ernie Holland, who worked at CYFD for 25 years, thinks that by relying on assessment tools like Structured Decision Making, the agency could make these effects even worse. After leaving the agency, he ran the Guidance Center, a nonprofit that offers mental health and other community-based services in Hobbs. 

Even as a young child protective services investigator, Holland never escaped the weight of the decisions he was making. He shares Ivy Woodward’s belief that “each decision you make changes your life.”

“Unless you’ve gone around the block three or four times to screw up your courage to knock on somebody’s door and ask them why they sodomized their infant, you don’t know what it’s like,” he said. “I’ve been there, done that, and I know what it’s like. And I know you’re risking some of yourself doing that work.”

That pressure never goes away. Holland still remembers a family whose case he managed nearly 50 years ago. He’s still not sure he made the right decision. 

As the agency relies more on standardized assessments, he worries humanity gets removed from the equation. 

For Holland, there’s a big difference between being able to say, “I made the decision based on this tool” and “I made the decision.”

“If you can hide behind an assessment tool,” Holland told me, “it’s not personal anymore. If you get it to where it’s not a personal decision, the kid loses. If you’re making life and death decisions, you damn well better own ‘em.”

This project was supported by the Global Reporting Centre and The Citizens through the Tiny Foundation Fellowships for Investigative Journalism.

When Meta suspends influential political accounts, who loses? https://www.codastory.com/authoritarian-tech/meta-oversight-board-cambodia-prime-minister/ Wed, 26 Jul 2023 13:05:43 +0000 https://www.codastory.com/?p=45457 Meta must decide whether to suspend Hun Sen’s Facebook page and the archive of recent Cambodian political history it contains

In January 2023, Cambodian Prime Minister Hun Sen live-streamed a speech on Facebook in which he threatened his opponents, vowing to send “gangsters” to their homes and to rally ruling party members “to protest and beat [them] up.”

The speech came back to haunt him on June 29, when Meta’s Oversight Board recommended that the company suspend the prime minister for six months for breaking the platform’s rules against threatening or inciting violence.

Later that day, Hun Sen beat the company to the punch and deleted his own page. It was a stunning move in Cambodia, where the prime minister has used the platform to trumpet his policy positions and lash out at his opponents to the nearly 14 million followers he has amassed since joining Facebook in 2015.

Some of his posts have had immediate real-world consequences. In February 2023, the forced closure of one of Cambodia’s last independent news outlets, Voice of Democracy, played out entirely on Hun Sen’s Facebook over two days. Angered by an article he claimed was erroneous, Hun Sen threatened in a post to revoke VOD’s license if the outlet didn’t apologize promptly.

After VOD expressed “regret” for any confusion the story caused, Hun Sen responded via Facebook that the statement was insufficient and said that the Cambodian Ministry of Information would revoke the outlet’s license.

“Is it acceptable to use words of ‘regret’ and ‘forgiveness’ instead of the word ‘apologize?’ For me, I cannot accept it,” Hun Sen wrote in the post. “Look for jobs elsewhere,” he added. Police and ministry officials arrived at VOD’s office the next morning with an order to cease publishing.

But now the future of Hun Sen’s page is uncertain. A few weeks after he deleted his account, his assistant reinstated it, ahead of the national elections. And Meta, which owns Facebook, has yet to officially decide whether to follow the recommendation of its Oversight Board and proceed with the six-month suspension. This means that the account could go offline again — and take with it a digital archive attesting to the more recent chapters of Hun Sen’s 38-year regime.

“Facebook was the key, important way for him to communicate his political messages to his audience and fans,” said Sokphea Young, a Cambodian research fellow at University College London who has studied the visual messaging of Hun Sen’s Facebook page. “However many people don’t like Prime Minister Hun Sen, the account is very important for the collective memory of Cambodian people and Cambodian history.”

And Cambodia is hardly alone in this. Around the world, speech coming from government officials has increasingly spilled over onto social media platforms. But companies like Meta and Twitter can decide to remove posts or entire accounts at any moment, regardless of how this might affect public access to information about state actors and institutions. Neither company has a policy on archiving state accounts, and, with a few exceptions, states don’t require companies to do this either.

In the mid-1990s, libraries, universities and governments around the world became concerned about losing electronic records to the fast-evolving digital sphere. But archiving from social media platforms has remained an “unloved” area of public policy, even as more and more government data has landed there, said William Kilbride, the executive director of the U.K.-based Digital Preservation Coalition, an advocacy group that works with public and private institutions around the world on archiving. 

Some governments with robust archiving capabilities deal with social media platforms on an individual basis to maintain records. The U.S. National Archives and Records Administration, for instance, has worked with Twitter to “freeze” previous versions of accounts linked to the presidency on the original platform. The U.K.’s National Archives maintains a social media database with Twitter and YouTube archives.

But major platforms have not created broader global policies around such programs, and they aren’t always transparent about how long they internally retain deleted or suspended accounts or those of deceased people. There are also technical challenges: Meta actively works to prevent scraping, a technique that archivists use to gather and then preserve such data. Finding automated ways to capture pages’ full context — such as comments on posts — is also “really difficult,” Kilbride said. 

Even within existing archival relationships, platforms still have the upper hand. After the January 6, 2021 riots in Washington, D.C., Twitter “permanently banned” former U.S. President Donald Trump and announced that it would not allow a federally preserved version of his @realDonaldTrump tweets, which the National Archives had been working to capture, to appear on the site, as would be typical with other accounts linked to the presidency.

But after Elon Musk bought the company, the account was reinstated on Twitter. In an email exchange, the National Archives would not confirm whether its prior efforts to preserve @realDonaldTrump are ongoing but said to “continue to check back for addition[al] content as it is added in the future.” The handle does not appear alongside other accounts the agency has made separately available on its website.

“The public record has been privatized and now sits on these platforms,” Kilbride said. “Suddenly, it’s the National Archives’ or whoever’s job to try to figure out what on earth to do.”

“They have no duty of transparency,” Kilbride added of the platforms. “There’s no accountability.”

Although most governments have national archiving laws, many lack the resources to enforce them on social media or store mass amounts of data on independent servers, putting them at a further disadvantage in preserving material when accounts — or an entire platform — suddenly go dark.

In June 2021, Muhammadu Buhari, then Nigeria’s president, received a 12-hour suspension on Twitter over a tweet targeting Igbo people, one of the biggest ethnic groups in the country, in which he wrote that he would “treat them in the language they understand.” Two days later, he blocked access to Twitter countrywide, making his own and other government-related accounts inaccessible within Nigeria.

‘Gbenga Sesan, the executive director of the digital rights group Paradigm Initiative Nigeria, said that in his circles, “nobody cared” at the time about preserving Buhari’s account locally: He was more focused on the thousands of requests for help accessing virtual private networks (VPNs) pouring in from across Nigeria. Plus, digital experts knew that a local block on Twitter didn’t mean the accounts had been lost, he said. 

In fact, as Nigerians continued to access Twitter with VPNs, Buhari’s account — mainly a place to share propaganda and party information — was little-missed for the roughly six months that Twitter was officially blocked. “I don’t ever remember going there to check what was said. A few times, I tweeted that silence was a better option for Buhari, because every time he speaks, the country gets angry,” Sesan said.

Still, Sesan wants to see social media platforms create archiving partnerships with governments on a global scale. But the Buhari episode also showed the need for a more expansive view of preservation: On its own, Buhari’s account would provide a slim portrait of Nigeria’s online history at the time. And what’s more, governmental partnerships would only work if both sides had mutual good will to preserve materials.

“You’ll find the digital aides sharing more historical facts than the president himself,” Sesan said.  “That, I think, is the major context when it comes to presidential archives and information: That kind of information also matters.”

Challenges with archiving also arise when it comes to posts that shine light on human rights abuses, war crimes and other atrocities that demand documentation in the service of future legal investigations and historical inquiry, particularly on Meta and YouTube. Last month, Meta’s Oversight Board called on the company to publicly address archiving practices in a decision about a video of Armenian prisoners of war. The video showed the faces of injured and deceased soldiers, raising questions about revealing the identities and locations of prisoners in conflict zones. Although the board agreed Meta was correct to leave up the content with a warning screen, it recommended the company commit to preserving evidence of atrocities, develop public protocols for preservation and explain how long it internally retains data and considers preservation requests. Meta has not yet responded publicly and did not respond to a request for comment.

In recent years, social media platforms have faced scrutiny for helping to spread hate speech and disinformation in places such as Myanmar, Kenya and India, making the platforms eager to appear quick to remove content or accounts spewing violent rhetoric. 

While deplatforming violent actors can be crucial to limiting offline violence, digital historians and researchers say it also causes public records to disappear from the internet before they have the chance to collect them.

The nonprofit Mnemonic grew out of the civil war in Syria and maintains four archives documenting evidence of potential human rights violations in Syria, Sudan, Yemen and Ukraine. As social media platforms have increased their use of automated tools that try to remove harmful content, the group has seen human rights-related material taken down more frequently, according to Maria Mingo, a policy and advocacy manager at Mnemonic. 

The organization stores its archives on independent servers. About one-quarter of the two million YouTube videos it has archived from Syria since 2014 have disappeared from YouTube itself. About one-tenth of the 2,000 Twitter accounts from the same archive have been removed during the same period.

In May 2023, Musk announced that Twitter would begin removing and archiving “inactive” accounts. Mingo said that rule could present a “huge problem” for jailed activists, whose accounts, and the information they collected at great personal risk, could suddenly disappear.

“If the content is taken down so, so quickly — unless platforms preserve and are able to engage with relevant stakeholders about the existence of the content — we won’t be able to do anything with it, we won’t be able to request it, we won’t be able to in any way try to use it,” Mingo said. 

“We can’t preserve something that no longer exists, or that we don’t know exists,” she added.

In Cambodia, questions around preservation and collective memory have persisted for decades. During the 1970s, the Khmer Rouge genocide wiped out nearly one-quarter of the population. Images of torture, starvation and detention have become irreplaceable to “memorialize how things went wrong in that period,” said researcher Young.

Right now, Hun Sen’s Facebook page provides an unmatched record of the current regime, replete with personal exchanges between Hun Sen and his followers. But it also has a more reflective style. He has long favored posting black-and-white or old photographs of himself or family members, dating back to shortly after the Khmer Rouge era when he came to power. In a recent post after the account was reinstated, he shared an undated photograph of himself as a young man walking through a placid green landscape, along with a message about the upcoming one-sided national election, which his party won in a landslide.

“Today is the last day of the party campaign, and also the day of great expectations for the Cambodian People’s Party in the upcoming election. I wish you all, the family of the ‘angel party,’ success countrywide,” the caption read, referring to the ruling party’s logo.

Such photographs have been central to the page for years, according to Young, who, over four years, has tracked the prime minister’s habit of contrasting black-and-white and color photos as a visual representation of his mythical political journey.

The idea is to show himself bringing Cambodia out of the darkness and into the light, a human representation of peace protecting the country from plunging back into civil war.

“As a copy of the history of Cambodia, maybe [his account] should be in a museum somewhere, in the next 40, 50 years, so the new generation can see,” Young said. “This is what Facebook was like, and this is what the prime minister was like, during the new era of digitalization.”

Lithuania goes after bots following spikes in pro-Russian propaganda https://www.codastory.com/authoritarian-tech/lithuania-russian-propaganda-online/ Tue, 18 Jul 2023 09:12:53 +0000 https://www.codastory.com/?p=45323 Lithuania’s parliament is looking to criminalize automated account activity – and to hold Big Tech accountable for the same

Big surges in international attention are unusual for LRT, the public media broadcaster in Lithuania. But last June, that changed suddenly when it began reporting on Lithuania’s decision to enforce EU sanctions on goods in transit to Kaliningrad, a Russian exclave that depends on trade routes through neighboring Lithuania for around 50% of its imports. 

As Lithuania joined the ranks of countries across the globe imposing sanctions on Russia over the war in Ukraine, LRT saw an avalanche of likes and shares descend upon its Facebook page. Posts that would normally receive 40 or 50 views were getting tens of thousands of interactions. And roughly half of the comments posted by LRT’s suddenly enormous audience espoused pro-Russian and anti-Ukrainian sentiments — an unusual dynamic in a country where support for Ukraine has been strong since the first days of the invasion. Analysis by Debunk, a Lithuanian disinformation research group, later found that much of this activity was driven by accounts situated in either Asia or Africa. This was a coordinated effort, one that almost certainly relied on automated accounts or bots. 

Now, a bill moving through Lithuania’s parliament is attempting to rein in this kind of activity. Representatives are deliberating on a set of proposed amendments to the country’s criminal code and public information laws that would criminalize automated account activity that poses a threat to state security. 

Under the changes, it would become a crime to distribute “disinformation, war propaganda, [content] inciting war or calling for the violation of the sovereignty of the Republic of Lithuania by force” from “automatically controlled” accounts. Violators could face fines, arrest or even three years’ imprisonment, depending on the particulars of the content in question.

The legislation is also expressly written to hold major social media platforms accountable for this kind of activity. It would empower the Lithuanian Radio and Television Commission to issue content and account removal orders to companies like Meta and Twitter.

Proponents of the legislation argue that social media companies have been ineffective in the fight against digital disinformation in Lithuania. In an explanatory note, lawmakers said the amendments would “send a clear message to internet platforms that an ineffective or insufficient fight against this problem is unacceptable and has legal consequences.”

“Right now, there is no regulation or legislation against bots,” said Viktoras Dauksas, the head of Debunk, the disinformation analysis center. But, he noted, “you can deal with bot farms through dealing with Meta.”

Twitter is a target of the policy too. In January 2022, a month before the invasion, U.S.-based disinformation researcher Markian Kuzmowyczius uncovered a bot attack on Twitter that falsely claimed that the Kremlin was recalling its diplomatic mission to Lithuania due to malign U.S. influence in the country. Removing diplomats is often a signal that the threat level to a country is high.

More than Meta, Twitter has long been a hub for automated accounts of all kinds. This was a key talking point for Elon Musk, who vowed to tackle the problem of malicious bots once the company was in his possession. While the company’s account verification policy has zigged and zagged since Musk’s takeover, it also appears to be honoring more of the content removal requests it receives from governments than it did under Jack Dorsey, which could prove a boon for Lithuania.

As for Meta, what the company terms “coordinated inauthentic behavior” has long been a violation of company policy, but its track record on enforcing this rule is mixed. The proposed amendments in Lithuania are meant to put the company on notice so that it is prepared to respond to requests from Lithuanian authorities in this vein. This is nothing new for Meta, which has faced regulatory measures around the world that are intended to ensure that content on the platform adheres to national laws. But Lithuania is among the smallest countries to have attempted to bring the company to heel in this way. 

Germany’s 2017 Network Enforcement Act, casually referred to by policymakers as “Lex Facebook,” requires platforms above a certain size to remove illegal content, including hate speech, within 24 hours of receiving notice or face fines that could easily rise to tens of millions of euros. India’s 2021 IT Rules require large platforms to establish offices in the country and to dedicate staff to liaise with government officials seeking content removals or user data. In each case, the company has ultimately opted to comply, and it’s easy to see why. India represents Meta’s largest national market worldwide — it is unquestionably in Meta’s best interest to stay in good standing with regulators. And Germany’s position within the EU would have made it politically risky for the company not to fall in line.

But can Lithuania expect the same results? In December, Meta responded to allegations that Facebook was blocking pro-Ukrainian content in Lithuania and even sent representatives to Vilnius, the Lithuanian capital, to discuss the matter with policymakers. But two months later, Meta issued a formal response to Lithuanian politicians insisting that the platform’s moderation principles were applied equally to both sides of the conflict and that the algorithm did not discriminate. The incident highlighted the small Baltic nation’s willingness to stand up to the tech giant as Facebook continues to be the most widely used platform in the country. But it also demonstrated Meta’s confidence in asserting its power in the region.

A month later, the heads of state from eight European countries, including Lithuania, wrote an open letter to tech firms calling on them to fight disinformation that “undermines” peace and stability.

Weeding out harmful bots is a complicated exercise in any country that wants to uphold freedom of expression. Although the proposed amendments would only apply to bots spreading information that is already prohibited under Lithuanian law, the criminalization of activity by an automated account still treads into relatively new territory. Lithuanian supporters of the two amendments, including Dauksas, argue that a clear line can be drawn between trolls, who are often people or profiles for hire, and bots, which Dauksas says should not be afforded human rights protections. Scholars like Jonathan Corpus Ong, an associate professor of global digital media at the University of Massachusetts, take a different stance. “Even in a bot farm, there are humans clicking the buttons and directing these armies of automated accounts. The distinction between human and automation is more nuanced and there are many layers of complicity,” he argues.

Speaking from the sidelines of TrustCon, a gathering of trust and safety professionals in San Francisco, Ong was eager to stress that blunt force regulation is often not the answer to the complex set of challenges that arise when combatting bots. 

“We all agree that some regulation is necessary, but we need to be extremely careful about using punitive measures, which could create further harm,” he said. 

In Ong’s view, we need to be cautious about what kind of information is shared between platforms and governments and what data is exchanged between platforms and law enforcement agencies, all of which would depend on sustained levels of trust and transparency. While Lithuania is rated “Free” in Freedom House’s “Freedom in the World” report, such legislation could pave the way for new forms of censorship in countries where democracy is under pressure or has been eroded completely.

Underlying all of this is also a persistent dearth of independent research on these dynamics, research that would require full cooperation from companies like Meta and Twitter, where the vast majority of operations like these play out. Calls for more transparency around bot and troll farms have been ongoing from analysts and scholars, but, so far, no social media platform has been open to independent audits of their own investigations, Ong said.

Researchers say their AI can detect sexuality. Critics say it’s dangerous https://www.codastory.com/authoritarian-tech/ai-sexuality-recognition-lgbtq/ Thu, 13 Jul 2023 14:41:56 +0000 https://www.codastory.com/?p=45224 Swiss psychiatrists say their AI deep learning model can tell if your brain is gay or straight. AI experts say that’s impossible

Between autonomous police dog robots, facial recognition cameras that let you pay for groceries with your smile and bots that can write Wordsworthian sonnets in the style of Taylor Swift, it is beginning to feel like AI can do just about anything. This week, a new capability has been added to the list: A group of researchers in Switzerland say they’ve developed an AI model that can tell if you’re gay or straight. 

The group has built a deep learning AI model that they say, in their peer-reviewed paper, can detect the sexual orientation of cisgender men. The researchers report that by studying subjects’ electrical brain activity, the model is able to differentiate between homosexual and heterosexual men with an accuracy rate of 83%. 

“This study shows that electrophysiological trait markers of male sexual orientation can be identified using deep learning,” the researchers write, adding that their findings had “the potential to open new avenues for research in the field.”

The authors contend that it “still is of high scientific interest whether there exist biological patterns that differ between persons with different sexual orientations” and that it is “paramount to also search for possible functional differences” between heterosexual and homosexual people. 

Is that so? When the study was posted on Twitter, it drew a strong reaction from researchers and scientists studying AI. Experts on technology and LGBTQ+ rights fundamentally disagreed with the prospect of measuring sexual orientation by studying brain patterns. 

“There is no such thing as brain correlates of homosexuality. This is unscientific,” tweeted Abeba Birhane, a senior fellow in trustworthy AI at Mozilla. “Let people identify their own sexuality.”

“Hard to think of a grosser or more irresponsible application of AI than binary-based ‘who’s the gay?’ machines,” tweeted Rae Walker, who directs the PhD in nursing program at the University of Massachusetts in Amherst and specializes in the use of tech and AI in medicine.

Sasha Costanza-Chock, a tech design theorist and associate professor at Northeastern University, criticized the fact that in order for the model to work, it had to leave bisexual participants out of the experiment. 

“They excluded the bisexuals because they would break their reductive little binary classification model,” Costanza-Chock tweeted.

Sebastian Olbrich, chief of the Centre for Depression, Anxiety Disorders and Psychotherapy at the University Hospital of Psychiatry Zurich and one of the study’s authors, explained in an email that “scientific research often necessitates limiting complexity in order to establish baselines. We do not claim to have represented all aspects of sexual orientation.” Olbrich said any future study should extend the scope of participants. 

“Bisexual and asexual individuals exist but are ‘simplified away’ by the Swiss study in order to make their experimental setup workable,” said Qinlan Shen, a research scientist at software company Oracle Labs’ machine learning research group who was among those criticizing the study. “Who or what is this technology being developed for?” they asked. 

Shen explained that technology claiming to “measure” sexual orientation is often met with suspicion and pushback from people in the LGBTQ+ community who work on machine learning. This type of technology, they said, “can and will be used as a tool of surveillance and repression in places of the world where LGBT+ expression is punished.” 

Shen also disagrees with the idea of trying to find a fully biological basis for sexuality. “I think in general, the prevailing view of sexuality is that it’s an expression of a variety of biological, environmental and social factors, and it’s deeply uncomfortable and unscientific to point to one thing as a cause or indicator,” they said.

This isn’t the first time a machine learning paper has been criticized for trying to detect signs of homosexuality. In 2018, researchers at Stanford tried to use AI to classify people as gay or straight, based on photos taken from a dating website. The researchers claimed their algorithm was able to detect sexual orientation with up to 91% accuracy — a much higher rate than humans were able to achieve. The findings led to an outcry and widespread fears of how the tool could be used to target or discriminate against LGBTQ+ people. Michal Kosinski, the lead author of the Stanford study, later told Quartz that part of the objective was to show how easy it was for even the “lamest” facial recognition algorithm to be trained into also recognizing sexual orientation and potentially used to violate people’s privacy. 

Mathias Wasik, the director of programs at All Out, has been campaigning for years against gender and sexuality recognition technology. All Out’s campaigners say that this kind of technology is built on the mistaken idea that gender or sexual orientation can be identified by a machine. The fear is that it can easily fuel discrimination. 

“AI is fundamentally flawed when it comes to recognizing and categorizing human beings in all their diversity. We see time and again how deep learning applications reinforce outdated stereotypes about gender and sexual orientation because they’re basically a reflection of the real world with all its bias,” Wasik told me. “Where it gets dangerous is when these systems are used by governments or corporations to put people into boxes and subject them to discrimination or persecution.”

The Swiss study was published in June, less than a month after Uganda’s president signed a new, repressive anti-LGBTQ+ law — one of the harshest in the world — that includes the death penalty for “aggravated homosexuality.” In Poland, activists are busy challenging the country’s “LGBTQ-free zones” — regions that have declared themselves hostile to LGBTQ+ rights. And the U.S. Supreme Court just issued a ruling that effectively legalizes certain kinds of discrimination against LGBTQ+ people. Identity-based threats against LGBTQ+ people around the world are clear and present. What’s less clear is whether AI should have any role in mitigating them.

The study’s researchers say that their work could help combat political movements advocating for conversion therapy by showing that sexual orientation has biological markers.

“Our research is absolutely not intended for use in prosecution or repression — nor would it seem to be a practicable method for such abuse,” said Olbrich. “There is no proof that this method could work in an involuntary setting. It is a sad reality that many technologies can be misused; the ethical responsibility is to prevent misuse, not halt the progress of scientific study.”

He added that the study’s objective was to identify the neurological correlates — not causes — of sexual orientation, in the hope of gaining a more nuanced understanding of human diversity. 

“Our work should be seen as a contribution to the larger quest to comprehend the remarkable workings of our neurons, reflecting our behaviors and consciousness. We didn’t set out to judge sexual orientation, but rather to appreciate its diversity. We regret if people felt uncomfortable with the findings,” he said. 

“However true these good intentions might be,” said Shen, “I don’t think it erases the inherent potential harms of sexual orientation identification technologies.”

On Twitter, Rae Walker, the UMass nursing professor, was more blunt. 

“Burn it to the ground,” they said.

Israel uses Palestine as a petri dish to test spyware https://www.codastory.com/authoritarian-tech/israel-spyware-palestine-antony-loewenstein/ Thu, 22 Jun 2023 10:41:55 +0000 https://www.codastory.com/?p=44680 Journalist Antony Loewenstein discusses how Israeli surveillance tech is tested in Palestine before being exported across the world

Israel is one of the world’s biggest suppliers of surveillance technology. Its defense companies provide spyware to everyone, from autocrats in Saudi Arabia to democrats in the European Union. It is an Israeli company that the widow of Washington Post columnist Jamal Khashoggi is suing for the hacking of her phone in the months leading up to her husband’s murder in the Saudi Arabian embassy in Istanbul. 

While Israeli companies are perhaps the most high-profile purveyors of spyware, several companies headquartered in the United States and in Europe also sell surveillance technology. And persistent regulatory inconsistencies and blindspots suggest that there is still considerable reluctance, globally, to legislate to prevent the misuse of such technology. In Europe, this week, countries including France, Germany and the Netherlands have been arguing for the need to install spyware to surveil journalists if security agencies deem it necessary. 

As governments vacillate over regulation, human rights abuses continue. Last month, Israel was reported to be using facial recognition technology software called Red Wolf to deliberately and exclusively track Palestinians. Journalist Antony Loewenstein was based for several years in East Jerusalem. In his new book, “The Palestine Laboratory,” he explores how Israel has turned Palestine into a testing ground for surveillance tools that Israeli companies then export to governments around the world. I spoke with Loewenstein, who lives in Australia, over the phone.

This conversation has been edited for length and clarity. 

When did the privatization of the Israeli defense industry begin and why was that an important moment?

For the first decade of Israel’s existence after 1948, it was all state run. The Six-Day War [in 1967], when Israel, in six days, took control of the West Bank and Gaza and East Jerusalem, really accelerated the defense industry. By the 1970s, there was a fairly healthy private Israeli arms industry. Some of the companies that had been public before were now private. But it’s important to remember that both in the past, and also now, with organizations like NSO Group, most of these companies are private in name only. They are arms of the state. 

They are used by the state to forward and pursue their diplomatic aims. In the last 10 or so years, Benjamin Netanyahu, the prime minister, and Mossad, the Israeli intelligence agency, have gone around the world to countries that are not friends with Israel and have held out Israeli spyware as a carrot. Basically, Israel is saying, ‘If you are friends with us, if you help us, if you join with us in the U.N. in certain ways, if you don’t criticize us so much, we will sell you this unbelievably effective spyware.’ And since the Russian invasion of Ukraine, there have been huge numbers of European countries and others desperately coming to Israel, wanting defense equipment to protect themselves from any potential Russian attack.

How has Israel’s tech industry changed borders across the world?

Maybe the most prominent example, although not particularly well known, is the Israeli surveillance towers on the U.S.-Mexico border. They were installed a number of years ago, and it doesn’t make much of a difference whether it’s a Democrat or a Republican in the White House. In fact, Biden is accelerating this technological border, so to speak, and the company that America has used is Elbit, which is Israel’s biggest defense company. They have done a lot of work in the West Bank and across the Israel-Gaza border. And the reason the U.S. used Elbit as a contractor was because they liked what Elbit was doing in Palestine. I mean, the company promotes itself as being ‘successful’ in Palestine.

Does this border technology change the willingness of states to commit violent acts?

I don’t think necessarily violence becomes less likely. But I think in some contexts, Israeli surveillance tech, what you see being tested on Palestinians, makes it far easier for regimes to not go down the path of killing people en masse. Instead, they just massively surveil their populations, which allows them to get all the information they potentially need without the need for the bad images, so to speak, of mass violence. However, I also think that with an almost inevitable surge in climate refugees and with global migration at its largest since World War II, a lot of nations will actually revert to extreme violence on their borders.

You can see what the EU has been doing in the last few years with the assistance of Israeli drones, unarmed drones. The EU has made the decision with Frontex, their border — so-called — security, to allow the vast majority of brown or black bodies on boats to drown. That’s a conscious political decision. They don’t feel that way about Ukrainian refugees. And just for the record, I think all people should be welcomed. But the European Union does not see it that way. And the idea that you could possibly in years to come have armed drones hovering over the Mediterranean, firing on boats, shooting boats out of the water, I think is very conceivable.

Does Israel’s defense industry pose a threat to its allies?

It does. To me, the relationship between Israel and the U.S. is like an abusive relationship. On the face of it, very close. I think they love each other. They’re expressing admiration for each other all the time. Without the financial, diplomatic and military support from the U.S., Israel would arguably not exist. And yet, according to the most accurate figure that I could find, every single day the NSA, America’s leading intelligence agency and the biggest intelligence agency in the world, has roughly 400 Hebrew speakers spying on Israel. Spying on their best friend. And rest assured, that works in reverse as well.

They don’t really trust each other. More importantly, in the last few years, the Biden administration has talked about trying to curtail the power of Israeli spyware. A year and a half ago, they sanctioned NSO Group, the company behind Pegasus. A lot of the media was saying, ‘Oh, this is fantastic, the White House is now taking spyware seriously.’ But I think that’s misunderstanding the issue. America doesn’t want competition. They don’t want a real challenge to their dominance in spyware. They’re pissed off that Israeli spyware, which has been sold to dozens and dozens of countries around the world, threatens their hegemony.

You wrote in the book that the Covid pandemic was a wake-up call for Israelis, showing how they, too, are vulnerable to surveillance.

For many Israeli Jews, for many years, all the surveillance was happening over there. It was happening to Palestinians in the West Bank and East Jerusalem. Israeli Jews didn’t really feel it themselves. They were being surveilled, but they were either unaware of it or didn’t seem to care. During the pandemic, Israel had lockdowns like a lot of other countries. A lot of Israel’s biggest defense companies — Elbit and NSO Group — pivoted to developing various tools to supposedly fight the pandemic. But it was still mass surveillance, mass monitoring, which they now used within Israel itself. 

For the first time, a lot of Israeli Jews discovered that they themselves were being monitored, that their phones had been hacked. Eventually, the occupation always comes home. Slowly, Israeli Jews are waking up to the reality that what’s happening literally down the road in Palestine will inevitably bleed back into their own world.

Digital footprints on the dark side of Geneva
https://www.codastory.com/authoritarian-tech/geneva-digital-surveillance/ | Thu, 15 Jun 2023
Photographer Thomas Dworzak documents digital surveillance of daily life in one of Europe’s wealthiest cities


For this photo essay, Magnum Photos President Thomas Dworzak traveled to Switzerland and documented the lives of Geneva residents along with the digital “footprints” they leave behind every day. Drawing on research by the Edgelands Institute that explored Geneva’s evolving systems of everyday surveillance, Dworzak sought to use photography to tell the story of how the digitalization of our daily lives affects — and diminishes — our security.

Special series

This is the second in a series of multimedia collaborations on evolving systems of surveillance in medium-sized cities around the world by photographers at Magnum Photos, data geographers at the Edgelands Institute, an organization that explores how the digitalization of urban security is changing the urban social contract, and essayists commissioned by Coda Story.

Our first essay examined surveillance on the streets of Medellín, Colombia.

Dworzak accompanied Geneva residents in their daily routines, documenting the digital traces of their activities throughout the day. He also researched the places that store our digital data and photographed them — an investigation that proved difficult and revealed the lack of transparency surrounding the handling and storage of personal data.

To conclude the project, Dworzak sent each of his subjects a postcard from places where their digital information is stored: a simple way to demonstrate the randomness of where our digitally collected information ends up.

Thomas writes: 

Do citizens of Geneva understand how surveillance takes place in their daily lives? The relationship between surveillance and power can be understood as a contemporary version of the “social contract,” originally conceptualized by the Genevan philosopher Jean-Jacques Rousseau in his 18th century seminal work on democracies.

As a photographer, I needed to set the place: Geneva. I wanted to play on the dark side of the quaint, cute and affluent image of one of the world’s wealthiest cities and the world of international relations in which the Genevans are so often entangled.

I needed to trace the connection between life in this comfortable European city and the hidden paths of information that form underneath a surveilled daily life. I spent time with a variety of regular Genevan people, all voluntary participants in our project. I photographed their daily routines, marking whenever they would leave a “digital footprint” when using their phones, credit cards, apps or computers. With the help of the Edgelands team, I then identified corresponding data centers around the world where their information was likely to have been stored. I created a set of postcards using freely available applications like Google Earth and Google Street View. These “postcards from your server” were then sent back to the respective volunteers from the countries where these data centers were located, highlighting the far-flung places that our private data goes to when we perform a simple task such as buying groceries or a bus ticket.

Geneva, December 2022. Davide agreed to let me track his digital footprints. Here, he shows his ticket on a train.
Geneva, January 2023. Postcard from the server. Google Earth screenshot of the location of the server where the digital footprints of Davide may be stored. Although corporate security and privacy policies prevented us from pinpointing its precise location, we were able to get an approximate idea of where individuals’ data was hosted.
Geneva, January 2023. Postcard from the server. A postcard from a server that may hold Davide’s data was sent back to Davide. This postcard was sent from a server administered by CISCO, at Equinix Larchenstrasse 110, 65993 Frankfurt, Germany.
Geneva, January 2023. United Nations Plaza. The broken leg of the “Broken Chair” monument, a public statue in front of the UN Palais des Nations. The statue is a graphic illustration evoking the violence of war and the brutality of land mines. It has become one of the city’s most recognized landmarks.
Geneva, January 2023. Postcard from the server. Google Earth screenshot of the location of the server where the digital footprints of Hushita may be stored. BUMBLE Equinix Schepenbergweg 42, 1105 AT Amsterdam, Netherlands. Hushita is another volunteer who agreed to let me track her digital footprints.
Geneva, December 2022. The European Organization for Nuclear Research, known as CERN, is an intergovernmental organization that operates the largest particle physics laboratory in the world. Established in 1954, it is based in a northwestern suburb of Geneva. CERN is an official United Nations General Assembly observer and is a powerful model for international cooperation. The history of CERN has shown that scientific collaboration can build bridges between nations and contribute to a broader understanding of science among the general public. In 1989, the World Wide Web was invented at CERN by Sir Tim Berners-Lee, a British scientist.
Geneva, December 2022. Surveillance camera shop.
Geneva, January 2023. Postcard from the server. Google Street View. Screenshot of the location of the server where some of the digital footprints of Renata may be stored. Apple Data Center, Viborg, Denmark. Renata is another volunteer who agreed to let me track her digital footprints.
Geneva, November 2022. Proton corporate server in Geneva. ProtonMail is one of the world’s safest encrypted email services. Nicholas is another volunteer who agreed to let me track his digital footprints.
Geneva, November 2022. Renata uses a digital sports watch.
Geneva, December 2022. Digital footprints with Antoine. The bus stop near his flat is named after Jean-Jacques Rousseau’s “Contrat Social.” Antoine is another volunteer who agreed to let me track his digital footprints.
Geneva, December 2022. Jean-Jacques Rousseau Island. The Genevan philosopher’s fundamental work on democracies is based on the notion of a “social contract.” The Edgelands Institute’s Geneva Surveillance Report examines how the relationship between citizens and surveillance leads to a potential new social contract.
Geneva, January 2023. Postcard from the server. A postcard from the potential server location of Antoine’s digital footprint was sent back to him. This postcard was sent from the server location of GOOGLE MAPS Rue de Ghlin 100, 7331 Saint-Ghislain, Belgium.

How TikTok influencers exploit ethnic divisions in Ethiopia
https://www.codastory.com/authoritarian-tech/tktok-ethiopia-ethnic-conflict/ | Wed, 14 Jun 2023
Social media influencers in Africa’s second-most populous country are helping to stoke conflict – and making money along the way

When Ethiopians took to the streets in February in reaction to a highly politicized rift within the country’s Orthodox Tewahedo Church, government authorities temporarily blocked social media platforms. On the outside, it may have seemed like just another blunt-force measure by an authoritarian state trying to quell social unrest. But the move was more keenly calculated than that — the rhetoric of social media influencers was having an outsized impact on how Ethiopians, both in the country and in Ethiopia’s politically influential diaspora, perceived what was happening. As in other moments of intense social conflict amid Ethiopia’s civil war, TikTok became ground zero for much of the conflict playing out online.

In early February, three archbishops of the Orthodox Tewahedo Church — one of the oldest churches in Africa that dates back to the 4th century — accused fellow church leaders of discriminating against the Oromo people, who constitute the largest ethnic group in Ethiopia’s population of 120 million. While church members come from a diverse array of ethnic backgrounds, worship services are predominantly conducted in the liturgical language of Ge’ez and in Amharic, which is a language primarily spoken by the Amhara people. Amharic is the dominant language of Addis Ababa, Ethiopia’s capital, and the working language of the federal government. This linguistic predilection underlines the cultural clout of Amharic. 

After the three archbishops — all of Oromo lineage — made their allegations of discrimination public, they were excommunicated by church authorities. They then declared their plans to form a breakaway synod, triggering an instant public outcry. The cleavages underlying Ethiopia’s civil conflict bubbled to the surface and devolved into violent skirmishes, resulting in a combined total of 30 fatalities in the southern Ethiopian town of Shashemene and in Addis Ababa.

But what was a serious political crisis for the church and for the country amounted to a prime opportunity for TikTok influencers seeking to spread their messages and turn a profit along the way.

A quick scroll through live sessions on TikTok reveals heated political discussions in Amharic, Oromo and Tigrinya, in which participants exchange barbs and strategize on how to confront their adversaries. Zemedkun Bekele is prominent among them. A self-proclaimed defender of the Orthodox Tewahedo Church, he is known for his forceful, admonitory videos that are often over an hour long. Bekele began broadcasting threats against the breakaway synod, claiming to have video evidence that its leaders had engaged in homosexual activity and threatening to release the tape to the public. Accusations like this resonate deeply in a nation steeped in conservatism, where homosexuality is viewed with considerable disdain.

A known social media influencer who had already been banned from both Facebook and YouTube in 2020 for violating their policies on hate speech and the promotion of violence, Bekele re-established himself on TikTok in February 2023, just in time to jump into the fray. Since then, he has amassed a dedicated audience of more than 203,000 TikTok followers, most of whom appear to be members of the Amhara ethnic group and followers of the Orthodox Tewahedo Church. 

In the midst of the crisis, Bekele also launched attacks against a senior church teacher, Daniel Kibret, who has become a staunch ally of Prime Minister Abiy Ahmed. Drawing on the fact that the prime minister comes from a mixed religious background (his father is Muslim and his mother is Christian), Bekele made unfounded claims that Kibret had secretly converted to Islam.

In a video with more than 19,000 views, Bekele maintained that he would not relent. “We will not back down without making sure the Ethiopian Orthodox Church is as big as the country itself. We will not back down without toppling Abiy Ahmed,” he said. “We will not back down without hanging Daniel Kibret upside down.” The video was posted on February 4, the same day that the three bishops declared their intentions to secede from the Church.

Another account called TegOromo also saw swift growth surrounding the church controversy. TegOromo has a passionate following and is on the opposite side of the conflict from Bekele. The person who runs the account has expressed support for the Oromo religious leaders who sought to establish the independent synod. The account’s moniker fuses the first three letters of “Tegaru” with “Oromo” — a calculated move to represent harmony between the Tigrayan and Oromo ethnic groups.

With more than 60,000 followers, TegOromo’s account is marked by overt threats, inflammatory language and aggressive rhetoric. One TikTok video urged supporters to “chop the Amharas like an onion.” This video was later removed from the platform, but copies of it remain accessible. In a live session, a TegOromo follower called on Oromo people to “kill all Amharas” and even specified that children should not be spared. TegOromo cheered him on, urging other followers to answer the call and take up arms.  

Despite the controversial nature of TegOromo’s content, the influencer’s popularity suggests a burgeoning trend. Republishing his material or circulating incisive and satirical clips featuring TegOromo has become a reliable strategy for Ethiopian content creators seeking higher engagement.

In another instance, the spotlight turned toward two emerging TikTok influencers, Dalacha and Betayoo, who garnered attention for their adept use of vitriol. In one video, Dalacha, who identifies as Oromo, launched a barrage of insults and sexual slurs at his rival TikToker, Maareg, who identifies as Amhara. The episode exemplified the depths to which Dalacha was willing to stoop in order to denigrate Maareg. Dalacha used language that reduced the Amhara community to mere cattle, intended only to amplify the prevailing animosity between the two ethnic groups.

In another video, Betayoo, who consistently identifies as Amhara, used similarly troubling language, employing both sexual and ethnic slurs. She directed her insults toward a rival TikToker who identifies as Tigrayan and who has publicly expressed disdain for the Amhara community. Betayoo’s actions escalated beyond targeting an individual. She proceeded to insult the entire Tigrayan community, expressing a desire for their eradication. 

Left: Zemedkun Bekele and his co-host celebrate achieving 60 million views. Right: A screenshot from TegOromo’s live session, subtitled in Amharic, in which he calls on his Oromo kin to eliminate the Amharas.

The videos I reference above also all contain clear violations of TikTok’s terms of service, yet they remain on the platform. TikTok’s Community Guidelines strictly prohibit hate speech or hateful behavior and promise to remove such material from their platform. Accounts and/or users that engage in severe or multiple violations of hate speech policies are promptly banned from the platform. Despite these guidelines, plenty of Ethiopians who have exhibited hateful behavior remain active on the platform and continue to produce content for significant numbers of followers.

When I approached TikTok staff members to alert them about the videos and ask them to comment for this piece, they did not respond. It is difficult to definitively prove that this kind of discourse directly contributes to violence on the ground. But it is clear that discussions of political violence and religious conflicts on TikTok often result in the spread of misinformation and amplify interethnic hatred. Clips containing these influencers’ offensive remarks have also seeped onto other platforms, such as YouTube and Facebook, where reposting or critiquing such content has become a low-effort method for content creators to gain engagement.

Given the sheer volume of such live streams, TikTok’s moderation team may be overwhelmed, struggling to monitor these discussions and remove inappropriate content. It is also worth noting that all of these accounts are run primarily in Amharic, Oromo or Tigrinya, languages that are spoken by millions of Ethiopians in and outside of the country but that have historically been underrepresented on major social media platforms. TikTok does not publicly disclose how many staff members or content moderators it employs for reviewing content in these languages.

All this engagement is not driven purely by political vitriol — it is also a pursuit of profit. The TikTok LIVE feature has seen a swift uptick in popularity among Ethiopian users, catalyzing the emergence of politically minded influencers who reap economic rewards through virtual gifts. These gifts can be converted into TikTok “diamonds,” which are in turn redeemable for actual cash.

Crafting politically charged clickbait, designed to fan the flames of ethnic and religious discord, is emerging as a common tactic for financial gain. It has had especially strong uptake among individuals in the Ethiopian diaspora. Many of the most impactful Ethiopian TikTok figures are actually located in Western nations. Zemedkun Bekele, for instance, lives in Germany.

Amid the ongoing crisis, Bekele proudly claimed to have received one of the most sought-after TikTok LIVE gifts — the lion, which translates to a little over $400 in real-world currency. He has prominently featured a video on his profile displaying a virtual lion roaring at the screen, serving as both a symbol of his influence and a testament to the economic gains that one can reap through this kind of engagement on TikTok.

In a 2021 essay, former New York Times media critic Ben Smith showed how TikTok’s algorithmic recommendation framework has helped to intensify cultural, linguistic and ideological divides among its global user base. The unfolding situation in Ethiopia could serve as a case study for Smith’s argument. With the videos I described — in addition to hundreds of others — the platform’s content dissemination strategy appears to inadvertently encourage distinct factions to isolate themselves and push each other to commit hate speech and even physical violence.

The rise of these online strongholds poses significant challenges to the inclusive, cross-cultural understanding that TikTok claims to want to foster. Users now risk becoming trapped in ideological echo chambers, detached from diverse perspectives and viewpoints, and increasingly vulnerable to politically motivated disinformation.

At the core of the issue lies the question of accountability. What obligation does TikTok, and by extension other social media platforms, have to curtail the spread of divisive content, particularly when it is financially incentivized? Moreover, could the pursuit of profit from politically charged content inadvertently pave the way for more extreme or hazardous content, potentially triggering threats of violence in real life?

In the end, for onlookers familiar with Ethiopian culture and politics, it is clear that the platforms that invite us to share our lives online are failing to mediate the complexities of the world they seek to engage with.

Should countries build their own AIs?
https://www.codastory.com/authoritarian-tech/legal-tools/sovereign-ai/ | Fri, 09 Jun 2023
AI will soon touch many parts of our lives. But it doesn’t have to be controlled by big tech companies

The generative AI revolution is here, and it is expected to increase global GDP by 7% in the next decade. Right now, those profits will mostly be swept up by a handful of private companies dominating the sector, with OpenAI and Google leading the pack.

This poses problems for governments as they grapple with the prospect of integrating AI into the way they operate. It’s likely that AI will soon touch many parts of our lives, but it doesn’t need to be an AI controlled by the likes of OpenAI and Google.

The Tony Blair Institute for Global Change, a London-based think tank, recently began advocating for the U.K. to create its own sovereign AI model — an initiative that some British media outlets have dubbed “ChatGB.” The idea is to create a British-flavored tech backbone that underpins large swaths of public services, free from the control of major U.S.-based platforms. Being “entirely dependent on external providers,” says the Institute, would be a “risk to our national security and economic competitiveness.”

Sovereign AIs stand in stark contrast to the most prominent tools of the moment. The large language models that underpin tools like OpenAI’s ChatGPT are built using data scraped from across the internet, and their inner workings are controlled by private enterprises.

In a 100-page “technical report” accompanying the release of GPT-4, its latest large language model, OpenAI declined to share information about how its model was trained or what information it was trained on, citing safety risks and “the competitive landscape” (read: “we don’t want competitors to see how we built our tech”). The decision was widely criticized. Indeed, the company could put its code out there and cleanse data sets to avoid posing any risk to individuals’ data privacy or safety. This kind of transparency would allow experts to audit the model and identify any risks it might pose.

Developing a sovereign AI would allow countries to know how their model was trained and what data it was trained on, according to Benedict Macon-Cooney, the chief policy strategist at the Tony Blair Institute.

“It allows you to — to some extent — instill your values in the model,” said Sasha Luccioni, a research scientist at HuggingFace, an open source AI platform and research group. “Each model does encode values.” Indeed, while 96% of the planet lives outside the United States, most big tech products are developed by a tiny, relatively elite group of people in the U.S. who tend to build technology encoded with libertarian, Silicon Valley-style ideals.

That’s been true for social media historically, and it is also coming through with AI: A 2022 academic paper by researchers from HuggingFace showed that the ghost in the AI machine has an American accent — meaning that most of the training data, and most of the people coding the model itself, are American. “The cultural stereotypes that are encoded are very, very American,” said Luccioni. But with a sovereign AI model, Luccioni says, “you can choose sources that come from your country, and you can choose the dialects that come from your country.”

That’s vital given the preponderance of English-language models and the paucity of AI models in other languages. While there are more than 7,000 languages spoken and written worldwide, the vast majority of the internet, upon which these models are trained, is written in English. “English is the dominant language, because of British imperialism and because of American trade,” said Aliya Bhatia, a policy analyst at the Center for Democracy & Technology, who recently published a paper on the issue. “These models are trained on a predominant model of English language data and carry over these assumptions and values that are encoded into the English language, specifically the American English language.”

A big exception, of course, is China. Models developed by Chinese companies are sovereign almost by default because they are built using data that is drawn primarily from the internet in China, where the information ecosystem is heavily influenced by the state and the Communist party. Nevertheless, China’s economy is big enough that it is able to sustain independent development of robust tools. “I think the goal isn’t necessarily that everything be made in China or innovated in China, but it’s to avoid reliance on foreign countries,” said Graham Webster, a research scholar and the editor-in-chief of the DigiChina Project at Stanford University’s Cyber Policy Center.

There are lots of ways to develop such models, according to Macon-Cooney, of the Blair Institute, some of which could become highly specific to government interests. “You can actually build large language models around specific ideas,” he explained. “One practical example where a government might want to do that is building a policy AI.” The model would be fed previously published policy papers going back decades, many of which are scrapped only to be revived by a successive government, building up an understanding of policy that could then be used to reduce the workload on public servants. Similar models could be developed for education or health, says Macon-Cooney. “You just need to find a use case for your actual specific outcome, which the government needs to do,” he said. “Then begin to build up that capability, feed in the right learnings, and build that expertise up in-house.”

The European Union is a prime example of a supranational organization that could benefit from its vast data reserves to make its own sovereign AI, says Luccioni. “They have a lot of underexploited data,” she said, pointing to the multilingual corpus of the European Parliament’s hearings, for instance. The same is true of India, where the controversial Aadhaar digital identification system could put the vast volumes of data it collects to use to develop an AI model. India’s ministers have already hinted they are doing just that and have confirmed in interviews that AI will soon be layered into the Aadhaar system. In a multilingual country like India, that comes with its own problems. “We’re seeing a large push towards Hindi becoming a national language, at the expense of the regional and linguistic diversity of the country,” said Bhatia.

Developing your own AI costs a lot of money — which Macon-Cooney says governments might struggle with. “If you look at the economics side of this, I think there is a deep question of whether a government can actually begin to spend, let alone actually begin to get that expertise, in house,” he said. The U.K. announced, in its March 2023 budget, a plan to spend $1.1 billion on a new exascale supercomputer that would be put to work developing AI. A month later, it topped that up with an additional $124 million to fund an AI taskforce that will be supported by the Alan Turing Institute, a government-affiliated research center that gets its name from one of the first innovators of AI.

One solution to the money problem is to collaborate. “Sovereign initiatives can’t really work because any one nation or one organization is, unless they’re very, very rich, going to have trouble getting the talent, the compute and the data necessary for training language models,” Luccioni said. “It really makes a lot of sense for people to pool resources.”

But working together can nullify the reason sovereign AIs are so attractive in the first place.

Luccioni believes that the European Union will struggle to develop a sovereign AI because of the number of stakeholders involved who would have to coalesce around a single position to develop the model in the first place. “What happens if there’s 13% Basque in the data and 21% Finnish?” she asked. “It’s going to come with a lot of red tape that companies don’t have, and so it’s going to be hard to be as agile as OpenAI.” Finland for its part has developed a sovereign AI project, called Aurora, that is meant to streamline processes for providing a range of services for citizens. But progress has been slow, mostly due to the project’s scale.

There’s also the challenge of securing the underlying hardware. While the U.K. has announced $1.1 billion in funding for the development of its exascale computer, it pales in comparison with what OpenAI has. “They have 27 times the size just to run ChatGPT than the whole of the British state has itself,” Macon-Cooney said. “So one private lab is many, many magnitudes bigger than the government.” That could force governments looking to develop sovereign models into the arms of the same old tech companies under the guise of supplying cloud computing to train the models — which comes with its own problems.

And even if you can bring down the computing power — and the associated costs — needed to run a sovereign AI model, you still need the expertise. Governments may struggle to attract talent in an industry dominated by private sector companies that can likely pay more and offer more opportunities to innovate.

“The U.K. will be blown out of the water unless it begins to think quite deliberately about how it builds this up,” said Macon-Cooney.

Luccioni sees some signs of promise for countries looking to develop their own AIs, with talented developers wanting to work differently. “I know a lot of my friends who are working at big research companies and big tech companies are getting really frustrated by the closed nature of them,” she said. “A lot of them are talking about going back to academia — or even government.”

Turkey uses journalists to silence critics in exile
https://www.codastory.com/authoritarian-tech/turkey-journalists-transnational-repression/ | Thu, 08 Jun 2023
Using the language of press freedom, Erdogan has weaponized the media to intimidate Turkish dissidents abroad

Early in the morning on May 17, the German police raided the homes of two Turkish journalists and took them into custody. Ismail Erel and Cemil Albay — who work for Sabah, a pro-government Turkish daily headquartered in Istanbul — were released after a few hours, but their arrests provoked strong condemnation in Turkey. Turkish President Recep Tayyip Erdogan, in the midst of a tight presidential race, told an interviewer that “what was done in Germany was a violation of the freedom of the press.”

The Big Idea: Shifting Borders

Borders are liminal, notional spaces made more unstable by unparalleled migration, geopolitical ambition and the use of technology to transcend and, conversely, reinforce borders. Perhaps the most urgent contemporary question is how we now imagine and conceptualize boundaries. And, as a result, how we think about community.

In this special issue are stories of postcolonial maps, of dissidents tracked in places of refuge, of migrants whose bodies become the borderline, and of frontier management outsourced by rich countries to much poorer ones.

The European Centre for Press and Media Freedom also came out in support of the Sabah journalists, condemning the detention and demanding that press freedom be upheld. But Turkey itself is a leading jailer of journalists, ranked 165th out of 180 countries in the 2023 World Press Freedom Index published by Reporters Without Borders. And, according to German prosecutors, Erel and Albay were under investigation for the “dangerous” dissemination of other journalists’ personal data.

German authorities have legitimate concerns about the safety of Turkish journalists living in exile. In July 2021, Erk Acarer, a Turkish columnist, was beaten up outside his home in Berlin. Later that month, German authorities began investigating Turkish nationalist organized crime groups operating in Europe after the police found a hit list of 55 journalists and activists who had fled Turkey.

In September 2022, Sabah published information that revealed the location of Cevheri Guven’s home. It appears likely — though it has not been confirmed by German officials — that this was the reason for the arrests of Erel and Albay. Guven himself had been arrested in Turkey in 2015 and sentenced to over 22 years in prison. He was the editor of a news magazine that had published a cover criticizing Erdogan. Out on bail before his trial, Guven wrote that he gave his “life savings” to a smuggler to get him and his family out of Turkey. He now lives in Germany.

The ability of states such as Germany and Sweden to protect refugees, whether they are fleeing Turkey, China, Russia or Iran, has waned, as authoritarian leaders have become more brazen in using technology to stalk, bully, assault, kidnap and even kill dissidents. The Turkish state’s appetite for targeting critical voices abroad, especially those of journalists, has been growing for some time. As Erdogan’s government clamped down on media freedom at home, it has co-opted journalists working at government-friendly news outlets into becoming tools of cross-border repression. This has allowed the state to reach outside Turkey’s borders to intimidate journalists and dissidents who have sought refuge in Western Europe and North America.

Since last year, Sabah has revealed details about the locations of several Turkish journalists in exile. In October 2022, it published the address and photographs of exiled journalist Abdullah Bozkurt. The report included details about where he shopped. This was just a month after I met Bozkurt at a cafe in the Swedish capital, Stockholm, where he now lives. Bozkurt told me that he is constantly harassed online by pro-government trolls and because of the large Turkish immigrant population in Sweden, many of whom are Erdogan supporters, has been forced into isolation. It has had, he said, an adverse impact on his children’s quality of life.

Two years before Bozkurt’s personal information was leaked, in June 2020, Cem Kukuc, a presenter on the Turkish channel TGRT Haber, said of Bozkurt and other critical journalists: “Where they live is known, including their addresses abroad. Let’s see what happens if several of them get exterminated.” Just three months after that broadcast, Bozkurt was attacked in Stockholm by unidentified men who dragged him to the ground and kicked him for several minutes. “I think this attack was targeted,” Bozkurt told the Committee to Protect Journalists, “and is part of an intimidation campaign against exiled Turkish journalists with the clear message that we should stop speaking up against the Turkish government.” Bozkurt deleted his address, vehicle and contact information from the Swedish government’s registration system after the 2020 attack, but both Sabah and A Haber, another pro-government media outlet, still published his address last year.

Sabah and A Haber are both owned by the sprawling Turkuvaz Media Group. It is “one of the monopolistic hubs for pro-government outlets,” said Zeyno Ustun, an assistant professor of sociology and digital media and film at St. Lawrence University in the U.S. The group’s chief executive is Serhat Albayrak, the brother of a former government minister, Berat Albayrak, who is also Erdogan’s son-in-law.

Turkuvaz says that its newspapers have a collective readership of 1.6 million. In April, a month before Turkey’s tense general election, in which Erdogan managed to secure his third term as president, Turkuvaz’s channel ATV was the most watched in the country.

A few days before the second round of the presidential election, in late May, I met Orhan Sali, the head of news at the English-language broadcaster A News and the head of the foreign news desk at A Haber. To enter Turkuvaz’s tall, glass-paneled headquarters on the outskirts of Istanbul, I had to pass through three security barriers. An assistant took me to Sali’s spacious office on the third floor. Sali, who was born in Greece, is small with an incongruously graying beard on his round, youthful face. He wore a crisp, white shirt. On a shelf near Sali’s desk sit a couple of awards, including at least one for “independent journalism,” he told me.

In the same breath, Sali also said, “We are pro-Erdogan, we are not hiding it.” He acknowledged that there is a risk in publishing the names of journalists critical of the Turkish government but said it was not unusual. “If you read the British tabloid newspapers,” he told me, “you will find tons of pictures, tons of addresses.” 

This is not entirely accurate, according to Richard Danbury, who teaches journalism at the City University in London. “It is not true,” he told me, “that even tabloids as a matter of course publish people’s addresses and photos of people’s houses, particularly if they have been at risk of being attacked.”

But Sali was unconcerned. He approached a panel of screens covering the wall. Some of these channels, he said, are hardline and totally supportive of Kemal Kilicdaroglu, the main opposition candidate in Turkey’s recent election. “All of them,” he told me, “are terrorists.”

In the lead-up to the presidential election, Turkuvaz outlets such as A News and A Haber gave Kilicdaroglu little to no coverage. Erdogan, meanwhile, received extensive coverage, according to Reporters Without Borders. One pro-government channel, TRT Haber, gave Erdogan 32 hours of airtime compared to just 30 minutes for Kilicdaroglu.

Sali, who seems to have a penchant for deflecting criticism of Turkuvaz’s journalism by comparing it to that of the British press, told me he sees no problem with this lack of balance. “The BBC,” he said, “is supporting the ruler. Who is the ruler? The king. You cannot say anything against the king, can you?”

At least seven journalists who have had their addresses published by Turkuvaz outlets are alleged by Erdogan’s government to be followers of the Islamic cleric Fetullah Gulen, who is suspected of having orchestrated a failed coup against Erdogan in 2016. Since the coup attempt, Erdogan’s government has imprisoned hundreds of critics they refer to as “FETO terrorists,” a derogatory reference to Gulen supporters. Cevheri Guven — the editor whose address in Germany was published in Sabah in September 2022 — is often described in pro-government media as the Joseph Goebbels of FETO, a reference to the Nazi propagandist.

“The 2016 coup had a major effect on the media landscape in Turkey,” said Joseph Fitsanakis, a professor of intelligence and security studies at Coastal Carolina University. “At that point,” he told me, “Erdogan made a conscious decision, a consistent effort to pretty much wipe out any non-AKP voices from the mainstream media landscape.” The AKP, or the Justice and Development Party, was co-founded by Erdogan in 2001.

In October 2022, the Turkish parliament passed sweeping legislation curtailing free speech, including a vaguely worded law that effectively leaves anyone accused of spreading false information about Turkey’s domestic and foreign security facing up to three years in prison.

Before Erdogan’s rise to power, Turkey did not enjoy total media freedom, said Ustun, the media professor at St. Lawrence University. But, she told me, during his 21 years in politics, “there has been a gradual demise of the media freedom landscape.” Following the widespread protests in 2013, referred to as the Gezi Park protests, and the 2016 coup attempt, “efforts to control the mainstream media as well as the internet have intensified,” she added. The overwhelming majority of mainstream media outlets are now under the control of Erdogan and his allies.

Henri Barkey, a professor at Lehigh University and an adjunct senior fellow at the Council on Foreign Relations, told me that Erdogan has “muscled the press financially” by channeling advertising revenues to pro-government outlets such as those owned by the Turkuvaz Media Group. Erdogan, Barkey says, has also weaponized the law. “They use the judicial system to punish the opposition press for whatever reason,” he told me. “You look left and you were meant to look right, and in Turkey today that is enough.”

The media has, for years now, been used as a tool of transnational repression, says Fitsanakis. In 2020, for instance, the U.K. expelled three Chinese spies who had been posing as journalists. But, Fitsanakis adds, since Russia invaded Ukraine in February 2022, intelligence services in Europe and North America, fueled by a heightened awareness of the threat emanating from Moscow, have been collaborating more closely to remove Russian spies from within their borders. 

The actions of other diplomatic missions too are being more closely monitored. Turkey, one of the most prolific perpetrators of transnational repression, according to Freedom House, has found itself a target of Western surveillance, making it harder for the state to place intelligence operatives inside embassies. In lieu of this traditional avenue for embedding intelligence sources in foreign countries, Fitsanakis believes, governments are turning in greater numbers toward friendly journalists. “It’s the perfect cover,” Fitsanakis told me. “You have access to influential people, and you get to ask a lot of questions without seeming strange.”

Erdogan’s re-election, experts fear, could mean he will further clamp down on democratic freedoms. Barkey believes there will be a brain drain as more intellectuals and critics leave Turkey for more congenial shores. But the evidence suggests that an emboldened Erdogan can still reach them.

“We might see a lot more emphasis on silencing any kind of opposition to Erdogan in the coming years,” Fitsanakis told me. “And because much of the opposition to Erdogan is now coming from Turks abroad, that fight is going to transfer to European soil.”

When your body becomes the border
https://www.codastory.com/authoritarian-tech/us-immigration-surveillance/ | Wed, 07 Jun 2023
Surveillance technology has brought U.S. immigration enforcement away from the border itself and onto the bodies of people seeking to cross it


By the time Kat set foot in the safe house in Reynosa, she had already escaped death’s grip twice.

The first time was in her native Honduras. A criminal gang had gone after Kat’s grandfather and killed him. Then they came for her cousin. Fearful that she would be next, Kat decided she needed to get out of the country. She and her 6-year-old son left Honduras and began the trek north to the United States, where she hoped they could find a safer life.

It was January 2023 when the two made it to the Mexican border city of Reynosa. They were exhausted but alive, free from the shadow of the fatal threats bearing down on their family in Honduras.

The Big Idea: Shifting Borders

Borders are liminal, notional spaces made more unstable by unparalleled migration, geopolitical ambition and the use of technology to transcend and, conversely, reinforce borders. Perhaps the most urgent contemporary question is how we now imagine and conceptualize boundaries. And, as a result, how we think about community.

In this special issue are stories of postcolonial maps, of dissidents tracked in places of refuge, of migrants whose bodies become the borderline, and of frontier management outsourced by rich countries to much poorer ones.

But within weeks of their arrival, a cartel active in the area kidnapped Kat and her son. This is not uncommon in Reynosa, one of Mexico’s most violent cities, where criminal groups routinely abduct vulnerable migrants like Kat so they can extort their relatives for cash. Priscilla Orta, a lawyer who worked on Kat’s case and shared her story with me, explained that newly-arrived migrants along the border have a “look.” “Like you don’t know where you are” is how she put it. Criminals regularly prey upon these dazed newcomers.

When Kat’s kidnappers found out that she had no relatives in the U.S. that they could shake down for cash, the cartel held her and her son captive for weeks. Kat was sexually assaulted multiple times during that period. 

“From what we understand, the cartel was willing to kill her but basically took pity because of her son,” Orta told me. The kidnappers finally threw them out and ordered them to leave the area. Eventually, the two found their way to a shelter in Reynosa, where they were connected with Orta and her colleagues, who help asylum seekers through the nonprofit legal aid organization Lawyers for Good Government. Orta’s team wanted to get Kat and her son into the U.S. as quickly as possible so they could apply for asylum from inside the country. It was too risky for them to stay in Reynosa, vulnerable and exposed.

For more than a month, Kat tried, and failed, to get across the border using the pathway offered to asylum seekers by the U.S. government. She was blocked by a wall — but not the kind we have come to expect in the polarized era of American border politics. The barrier blocking Kat’s entry to the U.S. was no more visible from Reynosa than it was from any other port of entry. It was a digital wall.

Kat’s arrival at the border coincided with a new policy implemented by the Biden administration that requires migrants to officially request asylum appointments at the border using a smartphone app called CBP One. For weeks, Kat tried to schedule a meeting with an asylum officer on the app, as the U.S. government required, but she couldn’t do it. Every time she tried to book an appointment, the app would freeze, log her out or crash. By the time she got back into CBP One and tried again, the limited number of daily appointment slots were all filled up. Orta and her team relayed the urgency of Kat’s case to border officials at the nearest port of entry, telling them that Kat had been kidnapped and sexually assaulted and was alone in Reynosa with her child. The officers told them they needed to use CBP One. 

“It was absolutely stunning,” Orta recalled. “What we learned was that they want everybody, regardless of what’s happening, to go through an app that doesn’t work.”

And so Kat and her son waited in Reynosa, thwarted by the government’s impenetrable digital wall.

The CBP One app is intended to be used for scheduling an appointment with immigration services.

The southern border of the U.S. is home to an expansive matrix of surveillance towers, drones, cameras and sensors. But this digital monitoring regime stretches far beyond the physical border. Under a program known as “Alternatives to Detention,” U.S. immigration authorities use mobile apps and so-called “smart technologies” to monitor migrants and asylum seekers who are awaiting their immigration hearings in the U.S., instead of confining them in immigrant detention centers. And now there’s CBP One, an error-prone smartphone app that people who flee life-threatening violence must contend with if they want a chance at finding physical safety in the U.S.

These tools are a cornerstone of U.S. President Joe Biden’s approach to immigration. Instead of strengthening the border wall that served as the rhetorical centerpiece of former President Donald Trump’s presidential run, the Biden administration has invested in technology to get the job done, championing high-tech tools that officials say bring more humanity and efficiency to immigration enforcement than their physical counterparts — walls and jail cells.

But with technology taking the place of physical barriers and border patrol officers, people crossing into the U.S. are subjected to surveillance well beyond the border’s physical range. Migrants encounter the U.S. government’s border controls before they even arrive at the threshold between the U.S. and Mexico. The border comes to them as they wait in Mexican cities to submit their facial recognition data to the U.S. government through CBP One. It then follows them after they cross over. Across the U.S., immigration authorities track them through Alternatives to Detention’s suite of electronic monitoring tools — GPS-enabled ankle monitors, voice recognition technology and a mobile app called SmartLINK that uses facial recognition software and geolocation for check-ins.

Once in the U.S., migrants enrolled in Alternatives to Detention’s e-monitoring program say they still feel enveloped by the carceral state: They may be out in the world and free to walk down the street, but immigration authorities are ever-present through this web of monitoring technologies.

The program’s surveillance tools create a “temporal experience of indefinite detention,” said Carolina Sanchez Boe, an anthropologist and sociologist at Aarhus University in Denmark, who has spent years interviewing migrants in the U.S. living under Alternatives to Detention’s monitoring regime.

“If you’re in a detention center, the walls are sort of outside of you, and you can fight against them,” she explained. But for those under electronic surveillance, the walls of a detention center reproduce themselves through technology that is heavily intertwined with migrants’ physical bodies. Immigration authorities are ever-present in the form of a bulky monitoring device strapped to one’s ankle or a smartphone app that demands you take a selfie and upload it at a certain time of day. People enrolled in Alternatives to Detention must keep these technologies charged and fully functioning in order to check in with their supervisors. For some, this dynamic transfers the role of an immigration officer onto migrants themselves. Migrants become a subject of state-sanctioned surveillance — as well as their own enforcers of it.

One person enrolled in Alternatives to Detention told Sanchez Boe that the program’s electronic monitoring tools moved the bars of a prison cell inside his head. “They become their own border guard, their own jailer,” Sanchez Boe explained. “When you’re on monitoring, there’s this really odd shift in the way you experience a border,” she added. “It’s like you yourself are upholding it.”

As the U.S. government transposes immigration enforcement to technology, it is causing the border to seep into the most intimate spheres of migrants’ lives. It has imprinted itself onto their bodies and minds.

The app that Kat spent weeks agonizing over is poised to play an increasingly important role in the lives of asylum seekers on America’s southern border. 

Most asylum requests have been on hold since 2020 under Title 42, a public health emergency policy that authorized U.S. officials to turn away most asylum seekers at the border due to the Covid-19 pandemic. In January 2023, the same month that Kat arrived in Reynosa, the Biden administration implemented a new system for vulnerable migrants seeking humanitarian exemptions from Title 42. The government directed people like Kat to use CBP One to schedule their asylum appointments with border officials before crossing into the U.S. 

But CBP One wasn’t built for this at all — it debuted in 2020 as a tool for scheduling cargo inspections, for companies and people bringing goods across the border. The decision to use it for asylum seekers was a techno-optimistic hack intended to reduce the messy realities at the border in the late stages of the pandemic.

But what started out as a quick fix has now become the primary entry point into America’s asylum system. When Title 42 expired last month, officials announced a new policy: Migrants on the Mexico side of the border hoping to apply for asylum must now make their appointments through CBP One. This new system has effectively oriented the first — and for many, the most urgent — stage of the asylum process around a smartphone app.

The government’s CBP One policy means that migrants must have a smartphone, a stable internet connection and the digital skills to actually download the app and make the appointment. Applicants must also be able to read English, Spanish or Haitian Creole, the only languages the app offers.

The government’s decision to make CBP One a mandatory part of the process has changed the nature of the country’s asylum system by placing significant technological barriers between some of the world’s most vulnerable people and the prospect of physical safety.

Organizations like Amnesty International argue that requiring asylum seekers to use CBP One violates the very principle upon which U.S. asylum laws were established: ensuring that people eligible for protection are not turned away from the country and sent back to their deaths. Under U.S. law, people who present themselves to immigration authorities on U.S. soil have a legal right to ask for asylum before being deported. But with CBP One standing in their way, they must first get an appointment before they can cross over to U.S. soil and make their case.

Adding a mandatory app to this process, Amnesty says, “is a clear violation of international human rights law.” The organization argues that the U.S. is failing to uphold its obligations to people who may be eligible for asylum but are unable to apply because they do not have a smartphone or cannot speak one of the three languages available on the app. 

And that’s nothing to say of the technology itself, which migrants and human rights groups working along the border say is almost irredeemably flawed. Among its issues are a facial matching algorithm that has trouble identifying darker skin tones and a glitchy interface that routinely freezes and crashes when people try to log in. For people like Kat, it is nearly impossible to secure one of the limited number of appointments that the government makes available each day. 

CBP One success stories are few and far between. Orta recalled a man who dropped to the ground and let out a shriek when he made an appointment. A group of migrants embraced him as he wept. “That’s how rare it is,” she said. “People fall to their knees and hold each other and cry because no one has ever gotten an appointment before.”

The week after Title 42 ended, I checked in with Orta. In the lead-up to the program’s expiration, the Biden administration announced that immigration officials would make 1,000 appointments available on CBP One each day and would lengthen the window of time for asylum seekers to try to book them. But Orta said the changes did not resolve the app’s structural flaws. CBP One was still crashing and freezing when people tried to log in. Moreover, the number of appointments immigration authorities offer daily — 1,000 across the southern border — is not nearly enough to accommodate the demand triggered by the expiration of Title 42.

“It’s still a lottery,” she sighed. “There’s nowhere in the app to say, ‘Hey, I have been sexually abused, please put me first.’ It’s just your name.”

Back in the spring, as Kat struggled with the app day after day, Orta and her colleague decided to begin documenting her attempts. She shared one of those videos with me, taken in early March. Kat — slight, in a black T-shirt — sat in a chair in Reynosa, fidgeting as she waited for CBP One’s appointment-scheduling window to go live. When it did, she let out a nervous sigh, opened the app and clicked on a button to schedule a meeting. The app processed the request for several seconds and then sent her to a new page telling her she didn’t get an appointment. When Kat clicked the schedule button again, her app screen froze. She tried again and again, but nothing worked. She repeated some version of this process every day for a week, while her attorneys filmed. But it was no use — she never succeeded. “It was impossible for her,” Orta said.

Kat is far from the only asylum seeker who has documented CBP One’s shortcomings like this. Scores of asylum seekers attempting to secure an appointment have shared their struggles with the technology in Apple’s App Store. Imagine the most frustrating smartphone issue you’ve ever encountered and then add running for your life to the mix. In the App Store, CBP One’s page features dozens of desperate reviews and pleas for technological assistance from migrants stranded in Mexico.

“This is just torture,” one person wrote. “My girlfriend has been trying to take her picture and scan her passport for 48 hours straight out of desperation. She is hiding in a town where she has no family out of fear. Please help!” Another shared: “If I could give negative stars I would. My family are trying to flee violence in their country and this app and the photo section are all that’s standing in the way. This is ridiculous and devastating.” 

The app, someone else commented, “infringes on human rights. A person in this situation loses to a mechanical machine!”

In Kat’s case, her lawyers tried other routes. They enlisted an academic who studies cartels’ treatment of women along the border to submit an expert declaration in her case. Finally, after more than six weeks of trying and failing to secure an appointment, Kat was granted an exception and allowed to enter the U.S. to pursue her asylum claim without scheduling an appointment on CBP One. Kat and her son are now safely inside the country and staying with a family friend. 

Kat was fortunate to have a lawyer like Orta working on her case. But most people aren’t so lucky. For them, it will be CBP One that determines their fates.

Biden administration officials claim that the tools behind their digitized immigration enforcement strategy are more humane, economical and effective than their physical counterparts. But critics say that they are just jail cells and walls in digital form.

Cynthia Galaz, a policy expert with the immigrant rights group Freedom for Immigrants, told me that U.S. Immigration and Customs Enforcement, which oversees Alternatives to Detention, “is taking a very intentional turn to technology to optimize the tracking of communities. It’s really seen as a way to be more humane. But it’s not a solution.”

Galaz argues that the government’s high-tech enforcement strategy violates the privacy rights of hundreds of thousands of migrants and their broader communities while also damaging their mental health. “The inhumanity of the system remains,” she said.

Alternatives to Detention launched in 2004 but has seen exponential growth under the Biden administration. There are now more than 250,000 migrants enrolled in the digital surveillance system, a jump from fewer than 90,000 people enrolled when Biden took office in January 2021. According to ICE statistics, the vast majority of them are being monitored through SmartLINK, the mobile phone app that people are required to download and use for periodic check-ins with the immigration agency. Migrants enrolled in this system face a long road to a life without surveillance, spending an average of 446 days in the program.

During check-ins, migrants enrolled in the program must upload a photo of themselves, which is then matched to an existing picture taken during their program enrollment using facial recognition software. The app also captures the GPS data of participants during check-ins to confirm their location.

The government’s increasing reliance on SmartLINK has shifted the geography of its embodied surveillance program from the ankle to the face. The widespread use of this facial recognition app is expanding the boundaries of ICE’s digital monitoring system, this time from a wearable device to something that is less visible but ever-more ubiquitous.

Proponents at the Department of Homeland Security say that placing migrants under electronic monitoring is preferable to putting them in detention centers as they pursue their immigration cases in court. But digitization raises a whole new set of concerns. Alongside the psychological effects of technical monitoring regimes, privacy experts have expressed concern about how authorities handle and store the data that these systems collect about migrants.

SmartLINK collects wide swaths of data from participants during their check-ins, including location data, photos and videos taken through the app, audio files and voice samples. An FAQ on ICE’s website says the agency only collects participants’ GPS tracking data at the time of their check-ins, but also acknowledges that it has the technical ability to gather location data in real time from participants who are given an agency-issued smartphone to use for the program — a key concern for enrolled migrants and privacy experts alike. The agency also acknowledges that it has access to enrollees’ historical location data, which it could theoretically use to determine where a participant lives, works and socializes. Finally, privacy experts worry that the data collected by the agency through the program could be stored and shared with other databases operated by the U.S. Department of Homeland Security, which oversees ICE — a risk the agency recently conceded in its first-ever analysis of the program.

Hannah Lucal, a technology fellow with the immigrant rights legal firm Just Futures Law, which focuses on the intersection of immigration and technology, has studied the privacy risks of Alternatives to Detention at length. She told me she sees the program’s wide-ranging surveillance as “part of a broader agenda by the state to control immigrant communities and to limit people’s autonomy over their futures and their own bodies.”

And the program’s continuous electronic monitoring has left some migrants with physical and psychological damage. The ankle monitors, Lucal said, “cause trauma for people even after they’ve been removed. They give people headaches and sores on their legs. It can be really difficult to bathe, it can be really difficult to walk, and there’s a tremendous stigma around them.” Meanwhile, migrants using SmartLINK have expressed to Lucal fears of being constantly watched and listened to. 

“People talked about having nightmares and losing sleep over just the anxiety that this technology, which is super glitchy, may be used to justify further punishment,” she explained. “People are really living with this constant fear that the technology is going to be used by ICE to retaliate against them.”

Alberto was busy at work when he missed two calls from his Alternatives to Detention supervisor. The 27-year-old asylum seeker had been under ICE’s e-monitoring system since he arrived in the U.S. in 2019. He was first given an ankle monitor but eventually transitioned over to the agency’s mobile check-in app, SmartLINK. Once a week, Alberto was required to send a photo of himself and his GPS location to the person overseeing his case. On those days, Alberto, who works with heavy and loud machinery, would stay home from his job to ensure everything went smoothly.

But one day this past spring, Alberto’s supervisor called him before his normal check-in time, while he was still at work. He didn’t hear the first two calls over the buzz of the room’s machinery. When things quieted down enough for Alberto to see another call coming in, he picked up. Fuming, Alberto’s supervisor ordered him to come to the program’s office the following day. 

“I told her, ‘Ma’am, I have to work, I have three kids, I have to support them,’” he told me in Spanish.

“That doesn’t matter to me,” the case worker replied. 

When Alberto showed up the next day, as instructed, he was told by his Alternatives to Detention supervisor that he had more than a dozen violations for missing calls and appointments — which he disputes — and he was placed on the ankle monitor once again. 

The monitor is bulky and uncomfortable, Alberto explained. In the summer heat, when shorts are in season, Alberto worries that people who catch a glimpse of the device will think he’s a criminal.

U.S. immigration authorities use GPS-enabled ankle monitors to track the movements of migrants enrolled in the Alternatives to Detention program.
Photo by Loren Elliot/AFP via Getty Images.

“People look at you when they see it,” he said, “they think that we’re bad.” The situation has worn on him. “It’s ugly to wear the monitor,” he told me. And it weighs even more heavily on him now that he is not sure when it will come off.

Over the past year, I’ve interviewed dozens of people with extensive knowledge of Alternatives to Detention, including immigration attorneys, researchers, scholars and migrants who are, or were, enrolled in the program. Those discussions, as well as an emerging body of research, suggest that Alberto’s reaction to the electronic monitoring he was exposed to is not uncommon. 

In 2021, the Cardozo School of Law published the most comprehensive study on the program’s effects on participants’ well-being, surveying roughly 150 migrants who wore ankle monitors. Ninety percent of people told researchers that the device harmed their mental and physical health, causing inflammation, anxiety, pain, electric shocks, sleep deprivation and depression. Twelve percent of respondents said the ankle monitor resulted in thoughts of suicide, and 40% told researchers they believed that exposure to the device left them with lifelong psychological scars.

Berto Hernandez, who had to wear an ankle monitor for nearly two years, described the device as “torturous.” “Besides the damage they do to your ankles, to your skin, there’s this other implication of the damage it does to your mental health,” Hernandez said.

Hernandez, who uses they/them pronouns, immigrated with their parents to the U.S. from Mexico at age 10. In 2019, when they were 30 years old, they were detained by immigration officers and enrolled in Alternatives to Detention as their deportation case proceeded.

Hernandez was in college while they had to wear the monitor and told me a story about a time they drove with a peer to a student retreat a few hours from their home in Los Angeles. All of a sudden, the ankle monitor started beeping loudly — an automatic response when it exits the geographic range determined by immigration authorities.

“I had a full panic attack,” Hernandez told me. “I started crying.” Although they had alerted their case manager that they would be out of town, Hernandez says their supervisor must have forgotten to adjust their location radius. After the incident, Hernandez had a physical reaction every time the device made noise.

“Whenever the monitor beeped, I would get full on panic attacks,” they explained. “Shaking, crying. I was fearful that they were going to come for me.” Hernandez thinks the level of fear and lack of control is part of the program’s objectives. “They want you to feel surveilled, watched, afraid,” they said. “They want to exert power over you.”

Hernandez was finally taken off of the ankle monitor in 2021, after appealing to their case manager about bruises the device left on their ankles. Hernandez was briefly allowed to do check-ins by phone but will soon be placed on SmartLINK. They don’t buy the government’s message that these technologies are more humane than incarceration.

“This is just another form of detention,” they told me. “These Alternatives to Detention exert the same power dynamics, the same violence. They actually perpetrate them even more. Because now you’re on the outside. You have semi-freedom, but you can’t really do anything. If you have an invisible fence around you, are you really free?”

Once on SmartLINK, Hernandez will join the 12,700-plus immigrants in the Los Angeles area who are monitored through the facial recognition app. Harlingen, Texas, has more than double that number, with over 30,600 people placed under electronic monitoring — more than anywhere else in the country. This effectively creates pockets of surveillance in cities and neighborhoods where significant numbers of migrants are being watched through ICE’s e-monitoring program, once again extending the geography of the border beyond its physical range.

“The implication of that is you never really arrive and you never really leave the border,” Austin Kocher, a Syracuse University researcher focusing on U.S. immigration enforcement who has studied the evolving geography of the border, told me. Kocher says these highly concentrated areas of migrant surveillance are known as “digital enclaves”: places where technology creates boundaries that are often invisible to the naked eye but hyperpresent to those who are subjected to the technology’s demands. 

“It’s not like the borders are like the racial impacts of building freeways through our cities, and things like that,” he noted. “They’re kind of invisible borders.”

Administering all of this technology is expensive. The program’s three monitoring devices cost ICE $224,481 daily to operate, according to agency data.

On that end, there is one clear beneficiary of these expansions. B.I. Incorporated, which started out as a cattle-tracking company before pivoting to prison technology, is the government’s only Alternatives to Detention contractor. It currently operates the program’s technology and manages the system through a $2.2 billion contract with ICE, which is slated to expire in 2025. B.I. is a subsidiary of the GEO Group, a private prison company that operates more than a dozen for-profit immigrant detention centers nationwide on behalf of ICE. GEO Group earned nearly 30% of its total revenue from ICE detention contracts in 2019 and 2020, according to an analysis by the American Civil Liberties Union. Critics like Jacinta Gonzalez, an organizer with the immigrant rights group Mijente, say this entire system is corrupted by profit motives — a money-making scheme for the companies managing the detention system that sets up financial incentives to put people behind physical and digital bars.

And B.I. may soon add another option to its toolkit. In April, ICE officials announced that they are pilot testing a facial recognition smartwatch to potentially fold into the e-monitoring system — an admission that came just weeks after the agency released its first-ever analysis of the program’s privacy risks. In ICE’s announcement of the smartwatch rollout, the agency said the device is similar to a consumer smartwatch but less “obtrusive” than other monitoring systems for migrants placed on them. 

Austin Kocher, the immigration enforcement researcher, said that touting technologies like the smartwatch and the phone app as “more efficient” and less invasive than previous incarnations, like the ankle monitors, is tantamount to “techwashing” — a narrative tactic to gain support and limit criticism for whatever shiny new tech tool the authorities roll out.

“With every new technology, they move the yardstick and say, ‘Oh, this is justified because ankle monitors aren’t so great after all,’” Kocher remarked. For people like Kocher, following the process can feel like an endless loop. First, the government detained migrants. Then it began to release them with ankle monitors, arguing that surveillance was kinder than imprisonment. Then it swapped the monitors for facial recognition, arguing that a smartphone is kinder than a bulky ankle bracelet. Each time, the people in charge say that the current system is more humane than what they had in place before. But it’s hard to know where, or how, it will ever end — and who else will be dragged into the government’s surveillance web in the meantime.

For people like Alberto, there is no clear end in sight. He doesn’t know when the monitor will come off. But he knows it won’t be removed until his supervisor gives the okay. He also has to keep the device working if he wants to avoid getting in trouble again. And he can see his daughter is paying attention.

Recently, she noticed the monitor and asked him what it was. Alberto tried to keep it light. “It’s a watch,” he told her, “but I wear it on my ankle.” She asked him if she could have one too. 

“No,” he replied. “This one is only for adults.”

Escaping China with a spoon and a rusty nail
https://www.codastory.com/authoritarian-tech/uyghur-thailand-escape-xinjiang-jail/ | Mon, 05 Jun 2023
How one Uyghur man fled Xinjiang via the notorious smugglers’ road and broke out of a Thai prison

On April 24, a 40-year-old Uyghur man was reported to have died in a detention center in Thailand. Just a couple of months earlier, in February, another Uyghur man in his forties died in the same center, where about 50 Uyghurs are currently held awaiting possible deportation to China. Over 200 Uyghurs were detained in Thailand in 2014, and about a hundred were estimated to have been deported to China where their lives were under threat. Activists and human rights groups in Germany and several U.S. cities recently protested outside Thai consulates, demanding the release of Uyghurs still held in detention centers.

Hundreds of Uyghurs fled China in 2014, as the Chinese authorities launched a crackdown on the Muslim-majority ethnic group native to the northwest region of Xinjiang. The aim, the government said, was to stamp out extremism and separatist movements in the region. The authorities called it the “strike hard campaign against violent terrorism” and created a program of repression to closely monitor, surveil and control the Uyghur population.

The authorities bulldozed mosques, saw any expression of religion as extremist and confiscated Qurans. By 2018, as many as one million Uyghurs had been sent to so-called “re-education” camps. Across the region, an extensive high-tech system of surveillance was rolled out to monitor every movement of the Uyghur population. This remains the case to this day, with the Chinese police in Urumqi, the capital of Xinjiang, reportedly requiring residents to download a mobile app which enables them to monitor phones. 

Back in 2014, Uyghurs seeking to flee the burgeoning crackdown were forced to take a notoriously dangerous route, known as the “smugglers’ road,” through Vietnam, Cambodia and Thailand into Malaysia — from there, they could reach Turkey. Though Malaysia had previously deported some Uyghur Muslims to China, in 2018, a Malaysian court released 11 Uyghurs on human rights grounds and allowed them safe passage to Turkey. By September 2020, despite Chinese anger, Malaysia declared it would not extradite Uyghurs seeking refuge in a third country. 

But before they could make it to Malaysia, many Uyghurs were detained by the immigration authorities in Thailand and returned to China. Human rights groups condemned the deportations, saying that Uyghurs returned to China “disappear into a black hole” and face persecution and torture upon their return. 

Hashim Mohammed, 26, was 16 when he left China. He spent three years in detention in Thailand before making a dramatic escape. He now lives in Turkey — but thoughts of his fellow inmates, who remain in Thai detention, are with him every day. This is his account of how he made it out of China through the smugglers’ road. 

Hashim’s Story 

On New Year’s Day, in 2019, I was released from immigration detention in Istanbul. It was late evening — around 10 p.m. It was the first time I had walked free in five years. And it was the end of my long journey from China’s Uyghur region, which I ran away from in 2014. 

It started back in the city of Urumqi in Xinjiang, 10 years ago now. I was 16 years old and had recently begun boxing at my local gym. In the evenings, I started to spend some time reciting and reading the Quran. The local Chinese authorities were beginning their mass crackdown on Uyghurs in the name of combating terrorist activity. Any display of religious devotion was deemed suspicious. 

The local police considered my boxing gym to be a sinister and dangerous place. They kept asking us what we were training for. They thought we were planning something. They started arresting some of the students and coaches at the gym. Police visited my house and went through all my possessions. They couldn’t find anything.

After some time, the gym closed — like lots of similar gyms all over the Uyghur region. People around me were being arrested, seemingly for no good reason. I realized I couldn’t live the way I wanted in my hometown, so I decided to leave. 

At that time, thousands of Uyghurs were doing the same thing. I had heard of a smugglers’ route out of China, through Cambodia, Vietnam, Thailand and eventually to Malaysia. From there, I’d be able to fly to Turkey and start a new life. We called it the “illegal way.” It’s very quick once you leave China, it only takes seven days to get to Malaysia. 

At the border leaving China, we met with the smugglers who would get us out. They stuffed around 12 of us into a regular car, all of us sitting on top of each other. I was traveling alone, I didn’t know anyone else in the car. 

I remember one guy, Muhammad, who I met in the car for the first time. He was from the same area as me. He was with his wife and two kids and seemed friendly. 

The road was terrifying. There was a pit of anxiety in my stomach as the smugglers drove through the mountainous jungle at night at breakneck speed. I watched the speedometer needle always hovering above 100 kmph (about 60 mph), and I couldn’t help thinking about how many people were in the car. We heard about another group, crossing the border into Cambodia in a boat, who nearly drowned. After just seven days, we reached Thailand and the border with Malaysia. We sat in the jungle, trying to decide what to do — we could try climbing the border fence. 

But we also saw a rumor on WhatsApp that if you handed yourself in to the Thai border police, they would let you cross the border to Malaysia and fly onward to Turkey within 15 days. People on the app were saying some Uyghurs had already managed it. At this point, we’d been sleeping outside, in the jungle, for days, and we believed it. We handed ourselves in, and the police took a group of us to a local immigration detention center in the Thai jungle. 

Fifteen days slipped by, and we began to realize that we’d made a terrible mistake. With every day that passed, our hope that we would get to Turkey slipped away a little further. No one came to help us. We were worried that the Thai authorities would send us back to China.  

I was put in a dark cell with 12 guys — all Uyghurs like me, all trying to escape China. Throughout our time in jail, we lived under the constant threat of being deported back to China. We were terrified of that prospect. We tried many times to escape.

I never imagined that I would stay there for three years and eight months, from the ages of 16 to 19. I used to dream about what life would be like if I was free. I thought about simply walking down the street and could hardly imagine it. 

There were no windows in the cell, just a little vent at the very top of the room. We used to take turns climbing up, using a rope made out of plastic bags, just to look through the vent. Through the grill, we could see that Thailand was very beautiful. It was so lush. We had never seen such a beautiful, green place. Day and night, we climbed up the rope to peer out through the vent. 

We knew that the detention center we were in was very close to the Thai border. One guy I shared the cell with figured something out about the place: The walls of this building, built for the heat, were actually very thin.

We managed to get hold of two tools. A spoon and an old nail. 

We began, painstakingly, to gouge a hole in the wall of the bathroom block. We took turns. Day and night, we had a rota and quietly scraped away at the wall, making a hole just big enough for a man to fit through. There was a camera in the cell, and the guards checked on us frequently. But they didn’t check the bathroom — and the camera couldn’t see into the bathroom area, either. 

We all got calluses and cuts on our hands from using these flimsy tools to try to dig through the wall. We each pulled 30-minute shifts. To the guards watching the cameras, it looked like we were just taking showers. 

The guys in the cell next door to ours were working on a hole of their own. We planned to coordinate our breakout at the same time, at 2 a.m. on a Sunday. 

We dug through as much of the wall as we could, without breaking through to the other side until the last moment. There was just a thin layer of plaster between us and the outside world. We drew numbers to decide who would be the first to climb out. Out of 12 people, I drew the number four. A good number, all things considered. My friend Muhammad, who I met on the journey to Thailand, pulled number nine. Not so good.

That Sunday, we all pretended to go to sleep. With the guards checking on us every few hours, we lay there with our eyes shut and our minds racing, thinking about what we were about to do.

Two a.m. rolled around. Quietly, carefully, we removed the last piece of the wall, pulling it inward without a noise. The first, second and third man slipped through the hole, jumped down and ran out of the compound. Then it was my turn. I clambered through the hole, jumped over the barbed wire below me and ran.

The guys in the next cell had not prepared things as well as us. They still had a thick layer of cement to break through. They ripped the basin off the bathroom wall and used it to smash through the last layer. It made an awful sound. The guards came running. Six more guys got out after me, but two didn’t make it. One of them was Muhammad. 

The detention center we were in wasn’t very high security. The gate into the complex had been left unlocked. We sprinted out of it, barefoot, in just our shorts and t-shirts, and ran into the jungle on the other side of the road, where we all scattered. 

I hid out for eight days in the jungle as the guards and the local police tracked us through the trees. I had saved some food from my prison rations and drank the water that dripped off the leaves in the humidity.

It’s impossible to move through the undergrowth without making a lot of noise — so when the police got close, we had to just stay dead still and hope they wouldn’t find us. At one point, we were completely surrounded by the police and could hear their voices and their dogs barking and see their flashlights through the trees. It was terrifying.

Finally, after days of walking and hiding in the undergrowth, we made it to Thailand’s border with Malaysia. It’s a tall fence, topped with barbed wire. I managed to climb it and jump over — but the guy I was with couldn’t make it. He was later caught and sent back to detention.

In total, there were 20 of us who had managed to break out of the Thai jail. Eleven made it to Malaysia. The others were caught and are still in the detention center in Thailand. 

After spending another year in detention in Malaysia, I was finally able to leave for Turkey. After two months in Turkish immigration detention, I walked free. I had spent my best years — from the age of 16 until 21 — in a cell. I feel such sorrow when I think of the others who didn’t make it. It’s a helpless feeling, knowing they’re still in there, living under the threat of being sent back to China. 

Now I have a good life in Istanbul. Every morning, I go to the boxing gym. I’d like to get married and start my own family here. But half of me lives in my home region, and my dream is to one day go back to my home country.

Muhammad, my friend who I met on the smugglers’ road, is still in the Thai jail. He’s such an open and friendly person, and he was like my older brother inside. When the hope drained out of me and I broke down, he always reassured me and tried to calm me down. He would tell me stories about the history of Islam and the history of the Uyghur people. I’ll always be grateful to him for that. I think about him, and the other Uyghurs still trapped in Thailand, all the time.

Indian wrestlers say ‘me too’ but the BJP is not listening
https://www.codastory.com/authoritarian-tech/india-wrestlers-protest/ | Thu, 01 Jun 2023
Olympic medalist athletes are camped out on the streets of Delhi, alleging sexual harassment by a powerful politician

On the morning of May 28, the Delhi police manhandled a group of high-profile Indian wrestlers, including Olympic medalists, into a police bus. Images of the athletes — the most prominent of whom were women — being shoved, roughed up and dragged along the streets went viral, causing anger and outrage in a country with very few individual medal winners at the highest levels of international sport. 

About a mile away, as the wrestlers were being violently restrained by the police, Indian Prime Minister Narendra Modi was opening the country’s new parliament building, estimated to have cost $120 million, in a controversial ceremony that was boycotted by at least 19 opposition parties. The wrestlers were marching toward the building to draw attention to their cause when they were stopped. They had already been protesting for weeks at Jantar Mantar in central Delhi, a site designated for protests. But permission to protest outside the new parliament building, said the police, had been denied.

For a little over a month, the wrestlers camped out at Jantar Mantar. They have alleged that Brij Bhushan Singh, arguably the single most powerful official in Indian wrestling over the last decade, has been sexually harassing young female wrestlers for years. The protesters include some of Indian wrestling’s biggest names — Sakshi Malik, the bronze medalist at the 2016 Rio Olympics, Vinesh Phogat, a medalist at the World Wrestling Championship in both 2019 and 2022, and Bajrang Punia, the bronze medalist at the 2020 Tokyo Olympics. Since it became an independent nation in 1947, India has won 30 Olympic medals, seven of them in wrestling. Medal-winning athletes are celebrated with fervor largely because there are so few of them in India.

Brij Bhushan Singh, the man the wrestlers accuse of systematic sexual abuse, is a six-time member of parliament. He is an influential figure in the Bharatiya Janata Party, India’s ruling party. Singh has the reputation of being a strongman who wields considerable political muscle in Uttar Pradesh, a vast northern state that is electorally crucial for keeping the BJP in power. In addition to his parliamentary duties, Singh has been the president of the Wrestling Federation of India since 2011. Though he was asked to temporarily step aside from his role at the Federation after the allegations came to light, he is still listed as its president on its website.

Brij Bhushan Singh, a six-time member of India’s Parliament and the president of India’s wrestling federation, has been accused of sexually harassing young female wrestlers for years. Photo by Deepak Gupta/Hindustan Times via Getty Images.

Since April 23, Indian wrestlers, including the sport’s biggest stars, have been living in a makeshift plastic tent and sleeping on mattresses laid out on the pavement. They have called for Singh’s dismissal from the Federation and for his arrest. “We have been sitting here asking for justice,” Vinesh Phogat told me. Their supporters point to the lack of action by the police, including delays in just registering a complaint, as evidence that the BJP is shielding Singh.

He has, the wrestlers say, been harassing young athletes, including at least one minor, for over a decade with impunity. When Sakshi Malik joined a training camp in the city of Lucknow in 2012, she told me, older wrestlers warned her that Singh “was not a good man, that he sexually harassed girls.” She described his predatory behavior as an open secret in the wrestling community. “The parents, the women’s coaches, the men’s coaches, everyone knew this was happening.” But, she added, he was so powerful that “no one had the courage to speak out against him.”

Phogat also told me that Singh would “harass almost every girl.” And that if the young women wrestlers resisted, Singh “would ruin their game” and subject them to “mental torture.” Many young women, Phogat said, “have left wrestling because of him.”

Paramjeet Malik, a former official physiotherapist of the Wrestling Federation of India, said he was aware that Singh harassed women. He told me that in 2014, three young wrestlers had confided in him that they had been sexually harassed by Singh. Malik lived with the athletes at the training camp in Lucknow that year. He told me that, on several occasions, he had noticed a car that he knew belonged to Singh stop at the camp to pick up women wrestlers. “I saw them leaving the camp at night, after eleven, or sometimes at midnight,” he told me. When he asked the girls what was going on, he said, some of them broke down and told him that they were being called to Singh’s residence in the city.

If they refused to go, Malik told me, they were told that Singh “would have their names removed from the camp’s list, that they would be declared unfit, that their careers would be ruined.” Some of these girls, he said, were under 18 and came from low-income backgrounds. Sport, to them and their families, was a way out of poverty. Malik said he made a written complaint to a senior coach at the camp but no action was taken. Malik alleges that when he spoke to the media about Singh’s behavior, he was fired. According to Malik, the coach who fired him admitted that he had been receiving calls from Singh himself. The coach warned Malik that Singh was a powerful man and that Malik’s life could be in danger if he persisted. “That very night,” Malik told me, “we had to flee the camp.”

Wrestler Sangeeta Phogat, part of a famous family of Indian wrestlers, was detained by Delhi police along with other protestors as they tried to march toward the new parliament building in Delhi on May 28, 2023. Photo by Sanjeev Verma/Hindustan Times via Getty Images.

The three star wrestlers leading the current protests — Sakshi Malik, Vinesh Phogat and Bajrang Punia — said they believed they had reached a level of recognition that finally empowered them to take on Singh and stop the abuse. The trigger, Malik told me, was when she heard that 10 women had been harassed by Singh after a recent junior world championship. A few of the young women spoke directly to Malik. She said she had to speak up. “Enough was enough,” Malik told me, “we didn’t want coming generations of women to have to face the same thing.” 

On April 21, seven women wrestlers, including a minor, filed police complaints against Singh at a Delhi police station. Their identities have not been publicly revealed. The women listed specific incidents of harassment between 2012 and 2022 and said they occurred at Singh’s official parliamentary residence in Delhi and during tournaments in India and abroad. The Indian Express newspaper reported that, in at least two complaints, the women described in detail how Singh touched them inappropriately on the pretext of checking their breath.

However, the Delhi police did not immediately register a case against Singh. The police in the Indian capital operate under the authority of India’s Home Ministry — as part of the federal, rather than local, government. India’s current home minister is Amit Shah, and he is effectively second only to Modi in the hierarchy of both the government and the BJP. 

When the police failed to take note of their complaint, the wrestlers filed a petition with the Supreme Court asking for a police probe. Only after the court intervened did the Delhi police register two complaints against Singh. One of these complaints was from a minor and filed under India’s stringent Protection of Children from Sexual Offenses Act — a guilty verdict under the act results in, at minimum, a five-year sentence.

Singh denies all allegations and says he is willing “to be hanged” if found guilty. He has called the wrestlers’ protests “politically motivated.” Over the last month, several leaders from India’s opposition parties have visited the wrestlers’ sit-in to extend solidarity. Singh has since described the athletes as “toys” in the hands of opposition parties.

“Sexual harassment is not a political issue,” Phogat told me. She said it was Singh who was trying to make their complaints about politics in a bid to “save himself.” The wrestlers, Phogat said, have put their careers on the line for their cause. “We have some respect, some standing in the country,” she told me. “Something must have happened for us to be here.”

Phogat pointed to the U.S. gymnast Simone Biles, who testified against the U.S. national gymnastics team’s doctor Larry Nassar — accused of sexual abuse by more than 100 women. “When Simone Biles spoke up against sexual harassment,” Phogat said, “did they call her political?” She described Singh as India’s Larry Nassar. “There are many Larry Nassars here,” she told me, “not just one, but at least we are taking on one now.”

Kavita Krishnan, a feminist activist and writer, says that the BJP is “backing their leader” in a “brazen and shameless” way. “The ruling party has not distanced itself from this man,” she told me. “I cannot remember so blatant a case of political protection.” She said Singh’s “political power” in Uttar Pradesh, which has 80 seats in the Indian parliament, more than any other state, is “the basis of very cynical calculations this government is making about keeping this guy around.”

Krishnan added that in a normal, healthy democracy, the wrestlers’ complaints would have caused huge political embarrassment. One of the primary reasons for the absence of pressure on the BJP, she said, was the lack of serious and sustained mainstream media coverage of the scandal. The BJP exercises its control, she said, not only through government bodies but also through one of its “main propaganda arms” — the media. “The control of the propaganda media over public opinion,” Krishnan said, is what “the government relies on” to shape public conversation. Most mainstream media, she said, are either neglecting the story or suppressing it. “The most influential media with the greatest reach, especially in non-English Indian languages,” Krishnan told me, “are, for the large part, totally batting for the BJP and Brij Bhushan Singh.” Vinesh Phogat told me that “national TV is making Singh the hero and us the villains.”

The wrestlers first held a public protest in Delhi in January 2023. At the time, the government persuaded them to call it off by forming an oversight committee to examine the allegations and by asking Singh to “step aside” from his role at the Wrestling Federation. By late April, though, the wrestlers felt they had no choice but to resume protests after they saw no serious action being taken against Singh. The oversight committee’s report wasn’t made public, and the athletes expressed a lack of faith in its functioning. 

Sakshi Malik told me that she believed the committee had given Singh “a clean chit,” which means effectively clearing him of all charges. The wrestlers claimed that Singh had also resumed overseeing tournaments in his area and was still calling the shots in the Federation, a sign of his political power.

To further show off his political clout, Singh has called for a mass rally on June 5 in the city of Ayodhya in Uttar Pradesh, a place sacred to Hindus. “On the appeal of the nation’s revered saints, a grand rally for public awareness,” reads a poster for the event, complete with an image of a Hindu god. Krishnan described the rally as an attempt by a BJP politician at “invoking Hindu identity” and “Hindu supremacist politics” to imply that he is innocent and deserves the support of all Hindus. Singh has claimed that over one million Hindu seers will attend. “Under the leadership of seers, we will force the government to change the law,” he declared, referring to India’s Protection of Children from Sexual Offenses Act. 

The wrestlers say that Singh has tried to intimidate the athletes who complained to the police. Malik told me that the minor in particular has been targeted. “Phone calls have been made to her parents,” Malik said. Strange cars have been spotted around her house at night.

Even as Singh has attempted grandstanding and deploying strong-arm tactics, the wrestlers have stood their ground. On May 28, the police detained the wrestlers for the day and arrested at least 700 others across the capital. With the wrestlers and their supporters held at different police stations, the authorities took the opportunity to clear their protest site and said they would no longer allow the month-long sit-in to continue. Delhi police also charged the wrestlers with “rioting” and “obstructing a public servant.” The wrestlers have since announced that they will begin an indefinite hunger strike. 

In the past few weeks, as the protests have intensified, the wrestlers have received support from student unions, women’s groups, labor unions, farmers’ collectives and even the International Olympic Committee. On the evening of May 23, nearly 500 people marched to India Gate, a war memorial in the heart of Delhi, as part of a candlelight protest in support of the wrestlers. Sakshi Malik stood on the edge of a police barricade and lit a candle, as hundreds gathered before her waving Indian flags. “This is a fight for India’s daughters,” she told the crowd. “We have to win this. And we will.”

How an EU-funded agency is working to keep migrants from reaching Europe
https://www.codastory.com/authoritarian-tech/icmpd-eu-refugee-policy/ | Wed, 31 May 2023
The International Centre for Migration Policy Development is arming countries along European borders with surveillance tech and training to keep migrants out of Europe


When he saw the Tunisian coast guard coming, Fabrice Ngo knew he wouldn’t make it to Italy that day. The young Cameroonian had pushed off from the shore of the Tunisian city of Sfax in a small metal boat with 40 others. They left under the cover of night alongside seven other boats. The small fleet motored north toward Italy, spread out, but all with the same destination. In the distance, the lights of seaside towns dotted the coastline.

The Tunisian coast guard found them two hours into their journey. As the police vessel approached, fear gave way to disbelief. The coast guards — in uniform and on an official ship — boarded the metal dinghy, dislodged and seized the boat’s motor and then sped off, motor in hand. The group of 40, most of them from West Africa, were left at sea with no motor. Panic ensued. Some began paddling with their bare hands.

The Big Idea: Shifting Borders

Borders are liminal, notional spaces made more unstable by unparalleled migration, geopolitical ambition and the use of technology to transcend and, conversely, reinforce borders. Perhaps the most urgent contemporary question is how we now imagine and conceptualize boundaries. And, as a result, how we think about community.

In this special issue are stories of postcolonial maps, of dissidents tracked in places of refuge, of migrants whose bodies become the borderline, and of frontier management outsourced by rich countries to much poorer ones.

“We didn’t know what to do. We couldn’t move forward. We started tearing up the fuel cans to paddle, everyone had their hands in the water,” Ngo told us. “Some brave ones undressed and jumped in the water to push the boat along.” (We have changed Ngo’s name to protect his safety.)

By mid-afternoon the following day, the boat had floated toward a small chain of islands off the coast of Sfax. Again, the Tunisian coast guard reappeared, towed the group farther out to sea and, again, left them floating at sea, still with no motor.

Then the weather started to turn — the waves grew choppy and water began to fill the dinghy.

“When we had advanced maybe 50 meters, that’s when the coast guard arrived,” Ngo told us. “They towed us back again in the middle, where the water is deep. The boat was getting weighed down by water. If it had continued to fill, we all would have died.”

Desperate for help, the group finally got the attention of a fishing boat that towed them to safety, ferrying them back to the coast near Sfax.

The Tunisian coast guard intercepted and then abandoned Ngo’s boat with the help of technology supplied by the European Union. In 2019, the EU inked a deal to provide nearly 20 million euros’ (about $21.4 million) worth of radar, undersea and airborne drones, radios and other technology, as well as training, to the government of Tunisia. EU officials made a similar agreement with Moroccan authorities. The Border Management Programme for the Maghreb region was designed to arm coast guard authorities in North Africa with new technology to be deployed along migration routes to Europe and to train them to use it. Tunisia recently surpassed Libya as the most heavily traveled route for irregular migration to Europe across the Mediterranean. 

Over the past decade, the EU has struck similar deals — exchanging hundreds of millions of euros worth of surveillance technology, other police equipment and accompanying training — with nearly every non-EU country that borders the bloc. At the center of these deals is the International Centre for Migration Policy Development, an innocuous-sounding international organization based in Vienna that has become one of the bloc’s go-to intermediaries for supplying surveillance equipment and training to police and coast guards in countries bordering the EU. 

The ICMPD’s clients are all either EU states or intergovernmental organizations — it receives more than half of its budget from the European Commission, the executive branch of the EU. Because the ICMPD is not a government institution, it can enable states to carry out operations along EU borders with much less transparency, accountability or regulation than what would be required of any EU government.

“The EU is breaking its own rules and values with the border regime we have built up: They partner with autocratic regimes and provide them with technology to use in the Mediterranean to keep people out,” said Ozlem Demirel, a member of the European Parliament from Germany. Demirel pointed to the ICMPD as an example of efforts by the European Commission to carry out this work with as little scrutiny as possible.

Spotlight: Morocco

ICMPD provided Moroccan authorities with technical surveillance systems from two companies, MSAB and Oxygen Forensics, as well as training on how to use the systems. Financed by the EU’s “Border Management Programme for the Maghreb Region,” the same instrument used to fund the Tunisian coast guard, the spyware’s official purpose is to combat irregular migration and human trafficking. However, the software, capable of extracting data from all smartphone types, could potentially be used for the surveillance of journalists and rights activists, as no checks are in place to prevent this.

In June 2022, 23 migrants mainly from sub-Saharan Africa were killed as Moroccan and Spanish police tried to stop them from crossing the European border in the city of Melilla, a Spanish territory in North Africa. Weeks later, the European Commission, ICMPD and Morocco signed a renewed partnership on migration committing to strengthen their relationship.

Hundreds of pages of documents we obtained through Freedom of Information requests, made primarily to the European Commission, shed light on the organization’s work along EU borders and go into minute detail about the ICMPD’s inner workings.

In 2019, the European Union provided top-of-the-line surveillance equipment to the state security service in Morocco, via the ICMPD, ostensibly to help the country tighten its borders and fight smuggling. But Moroccan authorities, already known for hacking the phones of independent journalists, activists and academics, could use this EU-provided technology to further perpetuate the same type of internal repression. In Libya, the ICMPD was paid by the EU to provide consulting services to Libyan migration authorities, including the Libyan Directorate for Combating Illegal Migration, which runs a network of detention centers that have been criticized by the U.N. human rights agency for the “unimaginable horrors” suffered by migrants detained there. In Bosnia, the ICMPD is building a new migration detention center. A spokesperson for Bosnia’s Ministry of Security told us that Bosnian authorities facilitated deportations to countries with which Bosnia has “good bilateral relations” but no deportation agreement. This is a dubious practice under international law.

And in Tunisia, the ICMPD is supplying technology and training to a coast guard that is increasingly being mobilized to carry out human rights abuses against migrants and refugees. The organization’s “Integrated Border Management Project” — funded by the EU and overseen by the German federal police — may look humanitarian on paper. But in practice, sources on the receiving end of the project say it is designed to prevent people from leaving Tunisia’s shores to seek refuge in Europe.

Ngo eventually made it to the other side of the Mediterranean, in another dinghy. We met him at a reception site for asylum seekers in northern Italy where he had befriended another asylum seeker — also from Cameroon. The two men fled the opposite sides of Cameroon’s civil war at more or less the same time. Ngo is from a French-speaking village and was forced to flee when his home was attacked by an English-speaking militia. His friend is an English-speaker and was a member of one of these same militias until his group was overrun by the Cameroonian military and he was forced to flee the country. But their paths have brought them together.

Both were able to flee Cameroon and make a home in Tunisia. For two years, Ngo worked as a car mechanic, while his friend worked in construction. Both lived in relative stability until, last February, they say they were forced to flee home, again.

In a televised speech on February 21, 2023, Tunisian President Kais Saied directly targeted Black Africans in Tunisia, referring to them as “hordes of illegal migrants” and charging them with carrying out “violence, crime and unacceptable actions.” Saied echoed the conspiracy theories of far-right political parties in both Tunisia and Italy, under Prime Minister Giorgia Meloni, that tie intracontinental immigration to a “criminal plan” to change the “demographic landscape.”

The speech triggered unrest across Tunisia, where Black people comprise about 10% to 15% of the population. Within days, many people from countries in sub-Saharan Africa who were living in Tunisia, as well as Black Tunisians, reported losing their jobs, being evicted from their homes and facing arbitrary detentions by the police and violent attacks by vigilante groups. After Saied’s speech, Ngo’s boss told him to go home and not to return. For the first time, Ngo considered leaving Tunisia.

“These attacks made many of us want to stay indoors all day, like a cat,” he said. “We couldn’t live in this condition — I was in Tunisia for two years and never imagined taking to the sea.”

“Everything changed after the president’s speech,” said Mohammed Salah, a refugee from Sudan who has lived in Tunisia since 2016. Salah hails from the Darfur region, which became the ground zero for a genocide carried out by Sudan’s notoriously brutal Janjaweed militia in the early 2000s. Granted refugee status in Tunisia two years after arriving, Salah has been working in construction ever since. But after Saied’s speech, he told us, “they fired me from my job, they kicked me out of my home. All because the president said that we don’t like Black people.”

Salah came to lead a movement of people that has, for two months, camped out in front of the offices of the U.N. High Commissioner for Refugees and the International Organization for Migration — in a plum neighborhood outside Tunis where many international organizations have their local headquarters. We spoke to Salah in April, just a few weeks after violence erupted in Khartoum, between the Sudanese army and the Janjaweed militia, which now calls itself the RSF. “I’ll go to Rwanda, to Europe, wherever,” Salah told us. “I just can’t go back to Sudan, especially now.”

Spotlight: Libya

ICMPD has been a key partner for the EU’s actions in Libya for years. Documents obtained via FOI shed light on these operations: In 2014, the organization published a white paper on the “legislative framework for migrant detention in Libya,” a strategy for how the country could better manage migration. ICMPD also began supplying technical equipment to Libya’s Interior Ministry, including for detention centers.

The Libyan Directorate for Combating Illegal Migration (DCIM) became infamous years later for the conditions inside its migration detention centers. Years of documentation from journalists and civil society organizations described a litany of abuses: solitary confinement, denial of water and food, torture, sale into enslavement and other human rights violations that in 2018 the U.N. human rights agency called “unspeakable horrors.”

In March 2023, investigators from the U.N. published a report alleging numerous crimes against migrants carried out by Libyan authorities, including torture and enslavement. “There are reasonable grounds to believe migrants were enslaved in official detention centres as well as ‘secret prisons,’ and that rape as a crime against humanity was committed,” wrote the U.N. investigators. “The ongoing, systematic, and widespread character of the crimes documented by the Mission strongly suggests that personnel and officials of the DCIM, at all levels of the hierarchy, are implicated.”

ICMPD documentation describes collaboration with the Libyan Directorate starting in 2014. The report goes on to assert that the Libyan Directorate “will be ICMPD’s primary counterpart in the project, as it has direct responsibility for the 19-20 detention centres in Libya.” Other program documents, also received through freedom of information requests, describe an ongoing collaboration with the Libyan Directorate. In those documents, it is repeatedly listed as a “key beneficiary” or a “target group” for European Union taxpayer money. In another document, a narrative report describing ICMPD operations in Libya between 2018 and 2019, the organization discusses its support for Libyan organizations to train DCIM agents. The training was aimed at “improving the rights of migrants in the detention/shelter centres by training agents of the ‘Anti-Illegal Immigration Agency.’”

But traveling by sea is becoming an increasingly dangerous option for people like Salah, as the European Union expands its cooperation with the authorities in Tunisia, with the ICMPD serving as a middleman. Sources at human rights and development organizations told us they were concerned that European policy in Tunisia will follow that of neighboring Libya, where the bloc began providing support for a coast guard intended to intercept migrant boats in international waters and to bring them back to the country from which they had just fled. The EU has been internationally condemned for its support of the Libyan coast guard, border police and migrant detention system, which, since 2016, has detained tens of thousands of migrants under inhumane conditions. An investigation by Amnesty International presented ample evidence of migrants being subjected to torture, sexual violence and even extrajudicial killings.

Before 2020, the Tunisian coast guard had a humanitarian focus, explained Romdhane Ben Amor, the director of the Tunisian Forum for Economic and Social Rights, a Tunis-based human rights organization. But in the past three years — and, Ben Amor notes, since the EU began its support for the country’s border authorities — his organization has documented extensive human rights abuses at sea by the Tunisian coast guard, similar to those seen in Libya.

In 2019, the ICMPD began supporting the Tunisian coast guard with a host of technical equipment and training, paid for by the EU. They have radar, communications equipment and drones — everything they need to stop people from leaving Tunisia’s shores or to frighten them away.

A report by Ben Amor’s organization that will be published in June, to which we were given advance access, details a pattern of abusive behavior by the Tunisian coast guard against migrants at sea. Dozens of interviews, including with shipwreck survivors and fishermen, show that the coast guard routinely fails to perform its duty to rescue migrants in distress. Researchers documented multiple incidents in which the coast guard deliberately provoked shipwrecks or stole motors from dinghies and left boats full of people adrift — which is exactly what happened to Ngo. The report offers figures that speak to the scale of these operations: Between January and April 2023, the Tunisian coast guard intercepted 19,719 migrants at sea. During the same period, 3,512 were arrested for “illegal stay.”

“The Europeans hide behind this organization,” Ben Amor told us. “So it’s not the European Union that does this, but it’s ICMPD, it’s an independent organization.”

“There is political pressure on the coast guard to prevent people from leaving, whatever the price, whatever the damage,” Ben Amor said. “That’s how the violence started, and the coast guard is responsible for a lot of violence.”

The ICMPD was established in 1993 in response to the fall of the Soviet Union. “We in Europe feared a mass invasion of Russians,” wrote Jonas Widgren, one of the ICMPD’s founders, in a 2002 academic paper. Widgren was frustrated by the lack of a coordinated response by European states to what he saw as a “never-ending asylum crisis.” At first, the ICMPD acted as a mix of a policy think tank and a diplomatic organization, facilitating dialogue among states on issues related to borders and migration and publishing policy briefs.

The organization grew steadily over the years, but the tipping point came in 2015, when more than one million people arrived in Europe, having fled the civil war in Syria. That same year, the ICMPD appointed a new director, Michael Spindelegger, an Austrian conservative politician. According to multiple former colleagues and development insiders, Spindelegger had the political will and the right connections to, as one former employee put it, “make the most of the crisis.”

Spotlight: Bosnia

ICMPD has played an active role in the deportations taking place in the Western Balkans. In 2022 alone, 829 individuals were deported from Bosnia to countries like Bangladesh and Morocco. These deportations primarily targeted individuals who had been pushed back into Bosnia from Croatia, an EU member state that has been accused of engaging in violent deportations of asylum seekers. 

Despite lacking deportation agreements with these countries, Bosnia’s Ministry of Security described the deportations as a result of “good bilateral relations.” ICMPD has even facilitated meetings between Bosnian authorities and third countries on migration and border-related matters, including deportation. According to a representative from Bosnia’s Ministry of Security, whom we spoke to for this story, the budget for these deportations came from funds that were earmarked as “Instrument for Pre-accession Assistance.” This is EU money that is allocated to help prospective member states meet the requirements to join the European Union.

ICMPD is involved in the construction of a controversial “detention unit” within the Lipa migrant camp near Bosnia’s border with Croatia. Bosnian authorities have faced criticism from humanitarian organizations due to their handling of asylum procedures, characterized by lengthy waiting times, high rejection rates, and a lack of adherence to the rule of law. According to data from the UN refugee agency, out of the 27,000 individuals who entered the country in 2022, none were granted refugee status.

“The European Union wanted to look like it was throwing money and equipment at the problem, basically throwing money to stop migrants,” recalled one former senior ICMPD employee, who spoke to us on the condition of anonymity for fear of professional repercussions. “Suddenly, there was all this funding available, which included border management training but also included equipment,” they said. “The European Commission can’t just hand over equipment to, say, the Moroccan government, so they need someone like the ICMPD to do it.” If the Commission were to try to push through this type of transaction without a middleman like the ICMPD, it would need the approval of the European Parliament. This can be hard to come by even in a favorable political climate. But the grim optics of reported abuses in Libya would likely draw unwanted scrutiny to the project and potentially jeopardize its approval.

Before coming to the ICMPD, Spindelegger held a series of top government jobs in Austria, including as finance minister and foreign minister. In 2015, he went on to chair the Agency for the Modernisation of Ukraine, an NGO funded by the pro-Russian Ukrainian businessman Dmytro Firtash. The following year, Spindelegger took the helm at the ICMPD.

Known for his neoliberal approach to migration policy, Spindelegger expanded the ICMPD into new regions and began training border guards and procuring technology and equipment for the police in most countries that border the EU bloc. With that expansion came a bigger budget, increasing from 16.7 million euros (about $18 million) in 2015 to 58 million euros ($62.2 million) in 2022. In 2022, 56% of the ICMPD’s budget came from the European Commission. Just three years ago, in 2020, the Commission provided 80% of the ICMPD’s budget.

“Our aim still is to be the go-to organisation for European states on all matters related to migration,” wrote Spindelegger in 2023. The organization also began running vaguely defined “migrant resource centers,” primarily in South Asia and the Middle East, that appear to be focused on dissuading people from pursuing migration without documents.

The ICMPD operates for the European Commission under a funding scheme called “indirect management,” whereby EU work is outsourced to external agencies and the Commission isn’t involved in how projects are carried out. Several sources told us that this means the ICMPD isn’t subject to the same transparency and accountability measures that it would be otherwise.

“By externalizing this work to an organization outside of the European Union, the Commission is making this work far less accountable, working in a sort of legal gray area,” said Demirel, the German parliamentarian. “The farther this action is from European institutions, the less we can control it — Parliament can’t look at contracts from ICMPD.”

This disconnect serves a practical purpose, said Jeff Crisp, who worked for the U.N. refugee agency for decades. He also pointed to “serious ethical issues that ICMPD doesn’t seem to have addressed.”

“There is a disconnect between some of the language the organization uses and the activities it’s involved in,” said Crisp. “They are making things sound very technocratic and apparently quite neutral, whereas in fact they have very specific political purposes, which are often contradictory to human rights values.” Sources also expressed concern about the overlap between the ICMPD and the EU bureaucracy when it came to staffing. Six former ICMPD employees and European development insiders all described a revolving door of former European Commission employees coming to work at the ICMPD and vice versa.

A spokesperson for the European Commission told us that ICMPD operations “continuously undergo audits, assessments and evaluations with regard to their compliance with rules and regulations of the EU, including the respect of human rights.” The spokesperson did not address allegations that these operations are contributing to human rights violations.

Outside the office of the International Organization for Migration in Tunis, Tunisia’s capital, just over 100 people were still camped out in protest when we visited in April. The headquarters is surrounded by tall white gates, with a plaza containing a small tent city stretching down one side of the building. Three people argued with the security guard at the gate, while others sat in the shade of one of the plaza’s two palm trees. 

One man from Guinea, who asked that his name be withheld for his safety, said he had been camped outside the IOM building to ask for medicine and a way out of the country. After the Tunisian president’s February speech, he was attacked by a group of locals who robbed him of all his belongings. When we met in April, his eye was still noticeably swollen.

“The first time I tried to leave, I was pulled back to shore,” he told us. “The second time, they stopped me on the shores and put me in prison for six months,” he said. He showed us a gap where his tooth once was, which he said he lost after being beaten by guards in prison. “Now I’ve lost it all.”

Ben Amor, the director of the Tunisian Forum for Economic and Social Rights, says this kind of indiscriminate violence has become commonplace for migrants and refugees throughout Tunisia, especially following Saied’s racist speech. 

“We are in the middle of a humanitarian crisis in Tunisia,” Ben Amor said. “And at the same time, ICMPD continues its border management project here — so they equip the Tunisian coast guard with drones, with a radar and with other surveillance systems to keep people from leaving.”

“All of this work is being masked to look like protection,” said Gabriella Sanchez, a migration expert at Georgetown University. Sanchez argues that the European Union carries out border projects with the ICMPD and other third-party organizations deliberately, as a way of avoiding responsibility and accountability. “It is the creation of this illusion that by giving work to third parties, the EU isn’t directly involved and aren’t necessarily morally responsible for the consequences,” Sanchez told us.

With a border control budget that leapt from 12 billion euros (about $12.8 billion) in 2014-2020 to more than 23 billion ($24.6 billion) in 2021-2027, the European Union is almost literally doubling down on its efforts along the border.

Back at the reception site in northern Italy, Fabrice Ngo said he was lucky to have survived his journey over the sea. On the day of his rescue, the fisherman who spotted them attached a line to their metal dinghy and brought them back to the coast. From there, Ngo remembers, the fisherman went back out to find the other boats that had departed Sfax together with Ngo’s. It was then that Ngo found out that the other boats had also been left without motors by the coast guard.

“They pulled back every boat except one. One boat refused the rescue, and they were left at sea,” Ngo remembered, shaking his head. “That’s how they shipwrecked. Many people died.”

Imran Khan is fighting Pakistan’s army with Twitter https://www.codastory.com/authoritarian-tech/pakistan-imran-khan-social-media/ Thu, 25 May 2023 14:00:30 +0000 https://www.codastory.com/?p=43614 The arrest of the former Pakistani prime minister unleashed days of protest and has mired the country in a deep political crisis

“This is the era of social media. You cannot suppress the truth,” said former Pakistani Prime Minister Imran Khan in a Twitter Space session attended by more than 200,000 users on May 22. “Will you put millions of people in jail? Are people not seeing what is happening?”

Imran Khan is famous in Pakistan for his savvy use of social media. It was instrumental in shaping his political image in the early 2000s and in building the campaign that brought him to power in August 2018. Throughout his premiership, social media was a key tool for Khan’s Pakistan Tehreek-e-Insaf party. But today, with Khan at the center of a conflict between political and military powers in Pakistan, social media too has become a space of bitter contention.  

Earlier this month, Khan was arrested on corruption charges by the Pakistan Rangers, a paramilitary force, while he was at the Islamabad High Court for a hearing. His arrest, on May 9, triggered nationwide protests and violent clashes between his supporters and the police, resulting in at least eight deaths and dozens of injuries. Khan’s supporters launched an arguably unprecedented attack on the Pakistani army and its institutions. In the city of Lahore, supporters set a mansion belonging to a senior military officer on fire. Since its formation as an independent state in 1947, Pakistan has spent over three decades, at various times, under military rule. Even when civilian governments have been in charge, the military has loomed in the background. Open defiance of the military’s hold on Pakistan is exceedingly rare.

In his latest Twitter Space event, Khan urged his supporters, whom he described as his “social media heroes,” to continue to stay strong in the face of an ongoing crackdown against him and workers from his political party, thousands of whom have faced arrests, been detained or are on the run. Pakistan, Khan said, is being governed by the “law of the jungle.”

Former Pakistani Prime Minister Imran Khan’s supporters protest his arrest in the northeastern city of Lahore on May 9, 2023. Photo by Arif Ali/AFP via Getty Images.

Technology has been central to Khan’s emergence as a leading politician. A decade after his PTI party formed in 1996, a group of tech-forward supporters built the party’s website — a first for any political party in Pakistan. At the time, PTI was derisively referred to as the “social media party,” and its leader was dubbed “Facebook Khan,” implying that the party lacked any real influence in a country dominated by the military and by warring political dynasties.

Strategic online campaigning, though, helped Khan’s PTI reach young people eager for change and for relief from the corrupt ruling elite. “Tabdeeli,” or change, trended on social media platforms across Pakistan. Inspired by Barack Obama’s 2008 presidential campaign, the PTI’s social media team were brimming with fresh, inventive ideas for how to leverage technology to market Khan. Soon, he was being referred to as Pakistan’s “Kaptaan,” Urdu for “captain,” a pointed reference to his glorious career as a cricket player.

By 2018, Khan’s social media machine was credited with delivering the party’s first victory in national elections. PTI’s digital politics marked a significant shift from the antiquated way in which Pakistan’s biggest parties conducted elections, in everything from the pre-poll targeting of voters to the on-the-day mobilization of supporters.

It’s not only PTI that benefited from its strong online presence. The military strongly supported Khan. In fact, until Khan was removed from office in 2022, it was hard to distinguish between the online networks of the PTI and the Pakistani military. These digital warriors were easily identified by their use of the Pakistani flag to show their patriotism and by the manner in which they organized to promote positive news about Pakistan, highlight criticisms of India and counter Pakistanis they characterized as “traitors” because they dared to dissent from the state’s narrative.

Members of Imran Khan’s digital media team became participants in national security meetings with military advisers. Digital strategy was a key component of foreign policy discussions.

In a study published in August 2022, researchers found that the interests of PTI supporters and the Pakistani army converged. “Patterns of Twitter retweets and analysis of Facebook data provide important evidence,” the researchers wrote, “of a de facto coalition between the networks of the military and PTI.” Dissidents, they pointed out, “were largely drowned out by the mainstream political parties and military.”

Now, with the PTI in direct opposition to the Pakistani military, conflict between these institutions and their supporters is playing out actively online. When authorities blocked internet access amid protests earlier this month, it was an admission that they could not contain the outrage of PTI supporters.

After Khan’s arrest on May 9, the Pakistani government blocked access to broadband services and social media platforms for four days. Though the state regularly applies an internet kill switch to ostensibly quell unrest, this was the longest such shutdown in a country of 128 million internet users. The intent was to contain the outrage and perhaps to silence groups critical of the military’s role in Pakistani politics; it failed entirely on both counts.

While criticism of the military’s role in politics is not unprecedented, the scale of the recent wave of anti-military sentiment sparked by Khan’s arrest was extraordinary. And it was generated mostly through social media. After Khan was ousted from office last year, anti-army hashtags began to trend on social media platforms. The growing criticism and anger over the army’s role in removing Khan from office culminated in the violence earlier this month. The Pakistani civilian government, led by Prime Minister Shehbaz Sharif, has already declared that protestors who attacked military properties will be tried under army law — draconian legislation that is typically used to try enemies of the state.

The pressure on Khan’s supporters and particularly on members of his political party is taking its toll. In a high-profile departure, Khan’s former human rights minister Shireen Mazari quit the party on May 23. She had been arrested and then arrested again, even after she had been granted bail, an “ordeal,” she said, that “had an impact on my health.”

But silencing PTI is particularly challenging due to its global reach. Regardless of whether coverage of Khan’s public speeches and rallies is censored on mainstream media in Pakistan, PTI posts hourly updates and testimonials from PTI workers with English subtitles across social media platforms, often with the hashtag #ThisWasNotOnTV.

“The whole world is watching, politics is no longer restricted to streets,” said Jibran Ilyas, PTI’s social media lead and a cybersecurity expert based in Chicago. When mobile internet networks were down in Pakistan, Ilyas organized an online campaign asking residents in protest areas to open their Wi-Fi networks to the public, helping PTI members upload footage to social media and share updates with the rest of the team.

Though, according to Khan, 10,000 party workers and most of the PTI leadership are under arrest or on the run, PTI’s digital team is still online. Fearing imminent arrest and speaking from an undisclosed location, a PTI worker told me they didn’t sleep for several days after Khan was arrested. “One of our team members was shot in the leg during protests and underwent a six-hour surgery. Even then, they were still posting updates on social media,” said another member of the PTI social media team. On TikTok, in the four days between Khan’s arrest and his bail hearing, the PTI’s official account reached over 100 million people and the team put out 164 videos, according to a recent report.

With its digital support and global reach, can PTI’s online coalition be dismantled? “It is possible PTI can sustain its social media mobilization in the face of censorship, calibrated shutdowns and a general crackdown, which may intensify,” said Asfandyar Mir, an academic who published the 2022 paper noting the existence of the “de facto coalition” between the army and PTI that led to Khan becoming prime minister.

As for the military, the country is once again papered with pro-army posters. The army has also been successful in coercing some PTI leaders to quit the party and in pressuring supporters to issue forced apologies online. Pakistan’s defense minister revealed that the government is considering banning the PTI because it has “attacked the very basis of the state.” And there is evidence that the state is shutting down internet services within a five-kilometer radius of Khan’s house in the city of Lahore to make it difficult for him to address his supporters online. “We are in uncharted territory for Pakistani politics and its intersection with digital mobilization,” Mir told me.

The future of Khan and his party hangs in the balance. But whether or not he and his party withstand the pressure, a key question remains unanswered: The people may be fearful of the state, but are they still respectful of its institutions?

Utah’s online porn law puts teens’ digital rights at risk https://www.codastory.com/authoritarian-tech/legal-tools/utah-age-verification-law/ Tue, 23 May 2023 13:32:40 +0000 https://www.codastory.com/?p=43555 The law raises critical questions about young people’s rights to information and the privacy implications of checking IDs at websites’ virtual doors

A new Utah law intended to keep kids from accessing pornography and other kinds of “harmful material” online is raising critical questions about the First Amendment rights of young people and the privacy implications of checking IDs at websites’ proverbial doors.

Policymakers who pushed for the law say it will help protect kids from mental health issues and other risks that can arise from viewing certain kinds of material online. But what counts as “harmful,” exactly? The law is aimed at pornography, but it extends to virtually any commercial website with content that does not have “literary, artistic, political or scientific” value for minors and that makes up more than a third of all material on that site. With the law now in effect this month, anyone in Utah can sue violators if a minor accesses content on their website. Nonprofit-run sites, search engines and news-gathering organizations are exempt from liability.

Utah passed another law earlier this year regulating minors’ access to social media platforms; it requires teens to get parental consent to use the platforms and prohibits them from using such platforms between 10:30 p.m. and 6:30 a.m. Research has shown that social media can be harmful to the mental health of young people. But age verification and time curfews might not be the best solutions to the problems that lawmakers have trained their focus on.

Heidi Tandy, a lawyer and First Amendment speech researcher, says it’s worrying that policymakers don’t consider kids’ rights. “It’s very clear that there is a sliding scale of First Amendment rights for those under the age of 18,” Tandy told me.

On Twitter, Electronic Frontier Foundation researcher Jason Kelley pointed out that while politicians tend to frame these laws as protection for children, they apply to everyone under the age of 18. He warned against “lumping in 10 year olds with seventeen year olds who can work, apply for emancipation, and drive.” Older teens are much more likely to seek out sexual content online — and have legitimate reasons to do so — than kids who are just nine or 10 years old.

Kelley suggested that the Utah law might end up pushing well past pornography to cover things like sexual education materials or fiction that includes sexual themes.

“The goal is often not just to remove or block what most of us would consider adult content, but go beyond that,” he told me. Kelley says advocates have “a certain reasonable fear that larger swaths of sites would be swept up in the law.”

“You’ve seen that, with definitions of what’s considered pornography or adult content in places like Florida, they’re removing books from libraries,” Kelley said, referring to recent legislation targeting books that are considered “sexually explicit” or that deal with gender identity, sexuality and related subjects.

Experts and adult film industry voices have also noted that these restrictions could send teens toward more obscure sites that parents or policymakers might not be aware of. This is a key argument made by Pornhub, the Canada-based adult content site that consistently ranks among the most popular websites in the world. Earlier this month, Pornhub blocked access to videos on its site for all users based in Utah to show its opposition to the law. People in Utah who tried accessing the site were instead redirected to a video featuring adult film actress Cherie DeVille explaining the company’s objections to the law. Among other things, she noted that it could lead teens to sites with no protections against videos depicting things like sexual violence or child abuse.

There’s good reason to believe that rules like the one in Utah will soon spread to much of the U.S. The state of Louisiana was the first ever to implement this type of age verification law. A raft of similarly worded age verification bills, what Kelley calls “copycat laws,” has been introduced in six states so far this year. All of these policies ostensibly require websites to introduce technical mechanisms for checking a user’s age before they can access that site’s content. Although the Louisiana law provides special guidance on this, Utah hasn’t established a standard for how sites should digitally verify a user’s age.

How should websites card their users, exactly? Utah recently implemented a pilot project that makes driver’s licenses accessible on a mobile device for an annual subscription fee. A digital ID works in tandem with a state’s motor vehicle administration, so it can be considered a valid form of identification for buying alcohol or other age-restricted products.

Louisiana, meanwhile, requires sites to use “commercial age verification systems.” And there’s a burgeoning industry waiting to serve sites in states with these requirements. Third-party services like FaceTec and Yoti offer biometrics-driven age verification software that websites can pay for, but this software can be costly to buy and maintain, making it difficult for small businesses to comply with the law.

All of the solutions on the table thus far raise significant concerns about the privacy of young people’s data. As Coda has reported in the past, using biometrics to verify someone’s identity or assess their age often requires sites to hold troves of personal data that can become vulnerable to breaches or even targeted abuse, harming users in the near term or for years ahead.

“People who want age verification laws have the best interest of teenagers at heart,” Tandy told me. But, she said, “I don’t think they’re thinking through the privacy ramifications of what they’re asking for.”
