Surveillance - Coda Story
https://www.codastory.com/authoritarian-tech/surveillance/

When deepfakes go nuclear
https://www.codastory.com/authoritarian-tech/ai-nuclear-war/ | Tue, 28 Nov 2023

Governments already use fake data to confuse their enemies. What if they start doing this in the nuclear realm?

Two servicemen sit in an underground missile launch facility. Before them is a matrix of buttons and bulbs glowing red, white and green. Old-school screens with blocky, all-capped text beam beside them. Their job is to be ready, at any time, to launch a nuclear strike. Suddenly, an alarm sounds. The time has come for them to shoot their deadly weapon.

Why did we write this story?

AI-generated deepfakes could soon begin to affect military intelligence communications. In line with our focus on authoritarianism and technology, this story delves into the possible consequences that could emerge as AI makes its way into the nuclear arena.

With the correct codes input, the doors to the missile silo open, pointing a bomb at the sky. Sweat shines on their faces. For the missile to fly, both must turn their keys. But one of them balks. He picks up the phone to call their superiors.

That’s not the procedure, says his partner. “Screw the procedure,” the dissenter says. “I want somebody on the goddamn phone before I kill 20 million people.” 

Soon, the scene — which opens the 1983 techno-thriller “WarGames” — transitions to another set deep inside Cheyenne Mountain, a military outpost buried beneath thousands of feet of Colorado granite. It exists in real life and is dramatized in the movie. 

In “WarGames,” the main room inside Cheyenne Mountain hosts a wall of screens that show the red, green and blue outlines of continents and countries, and what’s happening in the skies above them. There is not, despite what the servicemen have been led to believe, a nuclear attack incoming: The alerts were part of a test sent out to missile commanders to see whether they would carry out orders. All in all, 22% failed to launch.

“Those men in the silos know what it means to turn the keys,” says an official inside Cheyenne Mountain. “And some of them are just not up to it.” But he has an idea for how to combat that “human response,” the impulse not to kill millions of people: “I think we ought to take the men out of the loop,” he says. 

From there, an artificially intelligent computer system enters the plotline and goes on to cause nearly two hours of potentially world-ending problems. 

Discourse about the plot of “WarGames” usually focuses on the scary idea that a computer nearly launches World War III by firing off nuclear weapons on its own. But the film illustrates another problem that has become more pressing in the 40 years since it premiered: The computer displays fake data about what’s going on in the world. The human commanders believe it to be authentic and respond accordingly.

In the real world, countries — or rogue actors — could use fake data, inserted into genuine data streams, to confuse enemies and achieve their aims. How to deal with that possibility, along with other consequences of incorporating AI into the nuclear weapons sphere, could make the coming years on Earth more complicated.

The word “deepfake” didn’t exist when “WarGames” came out, but as real-life AI grows more powerful, it may become part of the chain of analysis and decision-making in the nuclear realm of tomorrow. The idea of synthesized, deceptive data is one AI issue that today’s atomic complex has to worry about.

You may have encountered the fruits of this technology in the form of Tom Cruise playing golf on TikTok, LinkedIn profiles for people who have never inhabited this world or, more seriously, a video of Ukrainian President Volodymyr Zelenskyy declaring the war in his country to be over. These are deepfakes — pictures or videos of things that never happened, but which can look astonishingly real. It becomes even more vexing when AI is used to create images that attempt to depict things that are indeed happening. Adobe recently caused a stir by selling AI-generated stock photos of violence in Gaza and Israel. The proliferation of this kind of material (alongside plenty of less convincing stuff) leads to an ever-present worry that any image presented as fact might actually have been fabricated or altered.

It may not matter much whether Tom Cruise was really out on the green, but the ability to see or prove what’s happening in wartime — whether an airstrike took place at a particular location or whether troops or supplies are really amassing at a given spot — can actually affect the outcomes on the ground. 

Similar kinds of deepfake-creating technologies could be used to whip up realistic-looking data — audio, video or images — of the sort that military and intelligence sensors collect and that artificially intelligent systems are already starting to analyze. It’s a concern for Sharon Weiner, a professor of international relations at American University. “You can have someone trying to hack your system not to make it stop working, but to insert unreliable data,” she explained.

James Johnson, author of the book “AI and the Bomb,” writes that when autonomous systems are used to process and interpret imagery for military purposes, “synthetic and realistic-looking data” can make it difficult to determine, for instance, when an attack might be taking place. People could use AI to gin up data designed to deceive systems like Project Maven, a U.S. Department of Defense program that aims to autonomously process images and video and draw meaning from them about what’s happening in the world.

AI’s role in the nuclear world isn’t yet clear. In the U.S., the White House recently issued an executive order about trustworthy AI, mandating in part that government agencies address the nuclear risks that AI systems bring up. But problem scenarios like some of those conjured by “WarGames” aren’t out of the realm of possibility. 

In the film, a teenage hacker taps into the military’s system and starts up a game he finds called “Global Thermonuclear War.” The computer displays the game data on the screens inside Cheyenne Mountain, as if it were coming from the ground. In the Rocky Mountain war room, a siren soon blares: It looks like Soviet missiles are incoming. Luckily, an official runs into the main room in a panic. “We’re not being attacked,” he yells. “It’s a simulation!”

In the real world, someone might instead try to cloak an attack with deceptive images that portray peace and quiet.

Researchers have already shown that the general idea behind this is possible: Scientists published a paper in 2021 on “deepfake geography,” or simulated satellite images. In that milieu, officials have worried about images that might show infrastructure in the wrong location or terrain that’s not true to life, messing with military plans. Los Alamos National Laboratory scientists, for instance, made satellite images that included vegetation that wasn’t real and showed evidence of drought where the water levels were fine, all for the purposes of research. You could theoretically do the same for something like troop or missile-launcher movement.

AI that creates fake data is not the only problem: AI could also be on the receiving end, tasked with analysis. That kind of automated interpretation is already ongoing in the intelligence world, although it’s unclear specifically how it will be incorporated into the nuclear sphere. For instance, AI on mobile platforms like drones could help process data in real time and “alert commanders of potentially suspicious or threatening situations such as military drills and suspicious troop or mobile missile launcher movements,” writes Johnson. That processing power could also help detect manipulation because of the ability to compare different datasets. 
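That dataset-comparison idea can be made concrete with a toy sketch. The snippet below is purely illustrative — it is not how Project Maven or any deployed system works, and every array, threshold and number in it is invented: it scores each pixel of a new satellite pass against the statistics of earlier passes over the same location, so a region that suddenly deviates from its own history stands out.

```python
import numpy as np

def anomaly_score(new_image: np.ndarray, history: np.ndarray) -> np.ndarray:
    """Per-pixel z-score of a new satellite pass against historical passes.

    new_image: (H, W) array of reflectance values for the latest pass.
    history:   (N, H, W) array of N earlier passes over the same location.
    Large scores mark regions that deviate from the location's own history.
    """
    mean = history.mean(axis=0)
    std = history.std(axis=0) + 1e-6  # avoid dividing by zero in stable regions
    return np.abs(new_image - mean) / std

# Toy usage: 10 historical passes over a 64x64 tile, plus a new pass
# with a fabricated "feature" pasted into one corner.
rng = np.random.default_rng(0)
history = rng.normal(0.3, 0.02, size=(10, 64, 64))
new_pass = rng.normal(0.3, 0.02, size=(64, 64))
new_pass[:8, :8] += 0.5  # the inserted fake

flagged = anomaly_score(new_pass, history) > 4.0
print(f"{flagged.sum()} suspicious pixels")  # roughly the 64 altered ones
```

The catch, as the experts quoted below point out, is that anyone who can run such a detector can also tune fakes until they slip under its threshold.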

But creating those sorts of capabilities can help bad actors do their fooling. “They can take the same techniques these AI researchers created, invert them to optimize deception,” said Edward Geist, an analyst at the RAND Corporation. For Geist, deception is a “trivial statistical prediction task.” But recognizing and countering that deception is where the going gets tough. It involves a “very difficult problem of reasoning under uncertainty,” he told me. Amid the generally high-stakes feel of global dynamics, and especially in conflict, countries can never be exactly sure what’s going on, who’s doing what, and what the consequences of any action may be.
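Geist’s phrase “reasoning under uncertainty” has a textbook core: Bayes’ rule. Here is a worked toy example, with invented sensor numbers, showing why a lone alert proves very little when the event it warns of is extremely rare — the same logic that recurs in the real-life false alarms described later in this story.

```python
def posterior_attack_probability(prior: float, hit_rate: float,
                                 false_alarm_rate: float) -> float:
    """Bayes' rule: P(attack | alert)."""
    p_alert = hit_rate * prior + false_alarm_rate * (1 - prior)
    return hit_rate * prior / p_alert

# If real attacks are vanishingly rare (prior = 1 in a million), even a
# sensor that catches 99% of attacks and false-alarms only 0.1% of the
# time yields a posterior under 0.1% -- the alert alone proves almost
# nothing, which is why human judgment and context still matter.
print(posterior_attack_probability(1e-6, 0.99, 0.001))  # ~0.00099
```

Flip the prior — assume an attack is already likely — and the same alert becomes damning, which is why the “basic assumptions about scenarios” raised later by Pavel Podvig matter so much.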

There is also the potential for fakery in the form of data that’s real: Satellites may accurately display what they see, but what they see has been expressly designed to fool the automated analysis tools.

As an example, Geist pointed to Russia’s intercontinental ballistic missiles. When they are stationary, they’re covered in camo netting, making them hard to pick out in satellite images. When the missiles are on the move, special devices attached to the vehicles that carry them shoot lasers toward detection satellites, blinding them to the movement. At the same time, decoys are deployed — fake missiles dressed up as the real deal, to distract and thwart analysis. 

“The focus on using AI outstrips or outpaces the emphasis put on countermeasures,” said Weiner.

Given that both physical and AI-based deception could interfere with analysis, it may one day become hard for officials to trust any information — even the solid stuff. “The data that you’re seeing is perfectly fine. But you assume that your adversary would fake it,” said Weiner. “You then quickly get into the spiral where you can’t trust your own assessment of what you found. And so there’s no way out of that problem.” 

From there, it’s distrust all the way down. “The uncertainties about AI compound the uncertainties that are inherent in any crisis decision-making,” said Weiner. Similar situations have arisen in the media, where it can be difficult for readers to tell if a story about a given video — like an airstrike on a hospital in Gaza, for instance — is real or in the right context. Before long, even the real ones leave readers feeling dubious.

Ally Sheedy and Matthew Broderick in the 1983 MGM/UA movie “WarGames.” Hulton Archive/Getty Images.

More than a century ago, Alfred von Schlieffen, a German war planner, envisioned the battlefield of the future: a person sitting at a desk with telephones splayed across it, ringing in information from afar. This idea of having a godlike overview of conflict — a fused vision of goings-on — predates both computers and AI, according to Geist.

Using computers to synthesize information in real-time goes back decades too. In the 1950s, for instance, the U.S. built the Continental Air Defense Command, which relied on massive machines (then known as computers) for awareness and response. But tests showed that a majority of Soviet bombers would have been able to slip through — often because they could fool the defense system with simple decoys. “It was the low-tech stuff that really stymied it,” said Geist. Some military and intelligence officials have concluded that next-level situational awareness will come with just a bit more technological advancement than they previously thought — although this has not historically proven to be the case. “This intuition that people have is like, ‘Oh, we’ll get all the sensors, we’ll buy a big enough computer and then we’ll know everything,’” he said. “This is never going to happen.”

This type of thinking seems to be percolating once again and might show up in attempts to integrate AI in the near future. But Geist’s research, which he details in his forthcoming book “Deterrence Under Uncertainty: Artificial Intelligence and Nuclear Warfare,” shows that the military will “be lucky to maintain the degree of situational awareness we have today” if they incorporate more AI into observation and analysis in the face of AI-enhanced deception. 

“One of the key aspects of intelligence is reasoning under uncertainty,” he said. “And a conflict is a particularly pernicious form of uncertainty.” An AI-based analysis, no matter how detailed, will only ever be an approximation — and in uncertain conditions there’s no approach that “is guaranteed to get an accurate enough result to be useful.” 

In the movie, with the proclamation that the Soviet missiles are merely simulated, the crisis is temporarily averted. But the wargaming computer, unbeknownst to the authorities, is continuing to play. As it keeps making moves, it displays related information about the conflict on the big screens inside Cheyenne Mountain as if it were real and missiles were headed to the States. 

It is only when the machine’s inventor shows up that the authorities begin to think that maybe this could all be fake. “Those blips are not real missiles,” he says. “They’re phantoms.”

To rebut fake data, the inventor points to something indisputably real: The attack on the screens doesn’t make sense. Such a full-scale wipeout would immediately prompt the U.S. to total retaliation — meaning that the Soviet Union would be almost ensuring its own annihilation. 

Using his own judgment, the general calls off the U.S.’s retaliation. As he does so, the missiles onscreen hit the 2D continents, colliding with the map in circular flashes. But outside, in the real world, all is quiet. It was all a game. “Jesus H. Christ,” says an airman at one base over the comms system. “We’re still here.”

Similar nonsensical alerts have appeared on real-life screens. Once, in the U.S., alerts of incoming missiles came through due to a faulty computer chip. The system that housed the chip sent erroneous missile alerts on multiple occasions. Authorities had reason to suspect the data was likely false. But in two instances, they began to proceed as if the alerts were real. “Even though everyone seemed to realize that it’s an error, they still followed the procedure without seriously questioning what they were getting,” said Pavel Podvig, senior researcher at the United Nations Institute for Disarmament Research and a researcher at Princeton University. 

In Russia, meanwhile, operators did exercise independent thought in a similar scenario, when an erroneous preliminary launch command was sent. “Only one division command post actually went through the procedure and did what they were supposed to do,” he said. “All the rest said, ‘This has got to be an error,’” because it would have been a surprise attack not preceded by increasing tension, as expected. It goes to show, Podvig said, “people may or may not use their judgment.” 

You can imagine in the near future, Podvig continued, nuclear operators might see an AI-generated assessment saying circumstances were dire. In such a situation, there is a need “to instill a certain kind of common sense,” he said, and make sure that people don’t just take whatever appears on a screen as gospel. “The basic assumptions about scenarios are important too,” he added. “Like, do you assume that the U.S. or Russia can just launch missiles out of the blue?”

People, for now, will likely continue to exercise judgment about attacks and responses — keeping, as the jargon goes, a “human in the loop.”

The idea of asking AI to make decisions about whether a country will launch nuclear missiles isn’t an appealing option, according to Geist, though it does appear in movies a lot. “Humans jealously guard these prerogatives for themselves,” Geist said. 

“It doesn’t seem like there’s much demand for a Skynet,” he said, referencing another movie, “Terminator,” where an artificial general superintelligence launches a nuclear strike against humanity.

Podvig, an expert in Russian nuclear goings-on, doesn’t see much desire for autonomous nuclear operations in that country. 

“There is a culture of skepticism about all this fancy technological stuff that is sent to the military,” he said. “They like their things kind of simple.” 

Geist agreed. While he admitted that Russia is not totally transparent about its nuclear command and control, he doesn’t see much interest in handing the reins to AI.

China, of course, is generally very interested in AI, and specifically in pursuing artificial general intelligence, a type of AI which can learn to perform intellectual tasks as well as or even better than humans can.

William Hannas, lead analyst at the Center for Security and Emerging Technology at Georgetown University, has used open-source scientific literature to trace developments and strategies in China’s AI arena. One big development is the founding of the Beijing Institute for General Artificial Intelligence, backed by the state and directed by former UCLA professor Song-Chun Zhu, who has received millions of dollars of funding from the Pentagon, including after his return to China. 

Hannas described how China has shown a national interest in “effecting a merger of human and artificial intelligence metaphorically, in the sense of increasing mutual dependence, and literally through brain-inspired AI algorithms and brain-computer interfaces.”

“A true physical merger of intelligence is when you’re actually lashed up with the computing resources to the point where it does really become indistinguishable,” he said. 

That’s relevant to defense discussions because, in China, there’s little separation between regular research and the military. “Technological power is military power,” he said. “The one becomes the other in a very, very short time.” Hannas, though, doesn’t know of any AI applications in China’s nuclear weapons design or delivery. Recently, U.S. President Joe Biden and Chinese President Xi Jinping met and made plans to discuss AI safety and risk, which could lead to an agreement about AI’s use in military and nuclear matters. Also, in August, regulations on generative AI developed by China’s Cyberspace Administration went into effect, making China a first mover in the global race to regulate AI.

It’s likely that the two countries would use AI to help with their vast streams of early-warning data. And just as AI can help with interpretation, countries can also use it to skew that interpretation, to deceive and obfuscate. All three tasks are age-old military tactics — now simply upgraded for a digital, unstable age.

Science fiction convinced us that a Skynet was both a likely option and closer on the horizon than it actually is, said Geist. AI will likely be used in much more banal ways. But the ideas that dominate “WarGames” and “Terminator” have endured for a long time. 

“The reason people keep telling this story is it’s a great premise,” said Geist. “But it’s also the case,” he added, “that there’s effectively no one who thinks of this as a great idea.” 

It’s probably so resonant because people tend to have a black-and-white understanding of innovation. “There’s a lot of people very convinced that technology is either going to save us or doom us,” said Nina Miller, who formerly worked at the Nuclear Threat Initiative and is currently a doctoral student at the Massachusetts Institute of Technology. The notion of an AI-induced doomsday scenario is alive and well in the popular imagination and also has made its mark in public-facing discussions about the AI industry. In May, dozens of tech CEOs signed an open letter declaring that “mitigating the risk of extinction from AI should be a global priority,” without saying much about what exactly that means. 

But even if AI does launch a nuclear weapon someday (or provide false information that leads to an atomic strike), humans still made the decisions that led us there. Humans created the AI systems and made choices about where to use them. 

And, besides, in the case of a hypothetical catastrophe, AI didn’t create the environment that led to a nuclear attack. “Surely the underlying political tension is the problem,” said Miller. And that is thanks to humans and their desire for dominance — or their motivation to deceive. 

Maybe the humans need to learn what the computer did at the end of “WarGames.” “The only winning move,” it concludes, “is not to play.”

In India, Big Brother is watching
https://www.codastory.com/authoritarian-tech/india-surveillance-modi-democratic-freedoms/ | Tue, 21 Nov 2023

Apple warned Indian journalists and opposition politicians last month that their phones had likely been hacked by a state-sponsored attacker. Is this more evidence of democratic backsliding?

Last month, journalist Anand Mangnale woke to find a disturbing notification from Apple on his mobile phone: “State-sponsored attackers may be targeting your iPhone.” He was one of at least a dozen journalists and Indian opposition politicians who said they had received the same message. “These attackers are likely targeting you individually because of who you are and what you do,” the warning read. “While it’s possible this is a false alarm, please take it seriously.”

Why This Story?

India, the world’s most populous democracy, goes to the polls next year and is likely to reelect Narendra Modi for a third consecutive five-year term. But evidence is mounting that India’s democratic freedoms are in regression.

Mangnale is an editor at the Organized Crime and Corruption Reporting Project, a global non-profit media outlet. In August, he and his co-authors Ravi Nair and NBR Arcadio published a detailed inquiry into labyrinthine offshore investment structures through which the Adani Group — an India-based multibillion-dollar conglomerate with interests in everything from ports, infrastructure and cement to green energy, cooking oil and apples — might have been manipulating its stock price. The documents were shared with both the Financial Times and The Guardian, which also published lengthy stories alleging that the Adani Group appeared to be using funds from shell companies in Mauritius to break Indian stock market rules.

Mangnale’s phone was attacked with spyware just hours after reporters had submitted questions to the Adani Group in August for their investigation, according to an OCCRP press release. Mangnale hadn’t sent the questions, but as the regional editor, his name was easy to find on the OCCRP website.

Gautam Adani, the Adani Group’s chairman and the second richest person in India, has been close to Indian Prime Minister Narendra Modi for decades. When Modi was campaigning in the 2014 general elections, which brought him to power with a sweeping majority, he used a jet and two helicopters owned by the Adani Group to crisscross the country. Modi’s perceived bond with Adani as well as with Mukesh Ambani, India’s richest man — all three come from the prosperous western Indian state of Gujarat — has for years given rise to accusations of crony capitalism and suggestions that India now has its own set of Russian-style oligarchs.

The Adani Group’s supposed influence on Modi is a major campaign issue for opposition parties, many of which are coming together in a coalition to take on the ruling Bharatiya Janata Party in the 2024 general election. According to Rahul Gandhi — leader of the opposition Congress party and scion of the Nehru-Gandhi dynasty, which has provided three Indian prime ministers — the Adani Group is so close to power it is practically synonymous with the government. He said Apple’s threat notifications showed that the government was hacking the phones of politicians who sought to expose Adani and his hold over Modi. 

Mahua Moitra, a prominent opposition politician and outspoken critic of Adani, reported that she had also received the warning from Apple to her phone. She posted on X: “Adani and PMO bullies — your fear makes me pity you.” PMO stands for the prime minister’s office.   

Mangnale, referring to the opposition’s allegations, told me that there was only circumstantial evidence to suggest that the Apple notification could be tied to the Indian government. As for his own phone, a forensic analysis commissioned by OCCRP did not indicate which government or government agency was behind the attack, nor did it surface any evidence that the Adani Group was involved. But the timing raised eyebrows, as the Modi government has been accused in the past of using spyware on political opponents, critical journalists, scholars and lawyers. 

In 2019, the messaging service WhatsApp, owned by Meta, filed a lawsuit in a U.S. federal court against the Israel-based NSO Group, developers of a spyware called Pegasus, in which it was revealed that the software had been used to target Indian journalists and activists. Two years later, The Pegasus Project, an international journalistic investigation, reported that the phone numbers of at least 300 Indian individuals — Rahul Gandhi among them — had been slated for targeting with the eponymous weapons-grade spyware. And last year, The New York Times reported that Pegasus spyware was included in a $2 billion defense deal that Modi signed in 2017, on the first ever visit made by an Indian prime minister to Israel. In November 2021, Apple sued NSO too, arguing that in a “free society, it is unacceptable to weaponize powerful state-sponsored spyware against those who seek to make the world a better place.”

What is happening to Mangnale is the most recent iteration of a script that has been playing out for the last nine years. India’s democratic regression is evident in its declining scores in a variety of international indices. In the latest World Press Freedom Index, compiled by Reporters Without Borders, India ranks 161 out of 180 countries, and its score has been declining sharply since 2017. According to RSF, “violence against journalists, the politically partisan media and the concentration of media ownership all demonstrate that press freedom is in crisis.”  

By May next year, India will hold general elections, in which Modi is expected to win a third consecutive five-year term as prime minister and further entrench a Hindu nationalist agenda. Since 2014, as India has become a strategic potential counterweight to runaway Chinese power and influence in the Indo-Pacific region, Modi has reveled in being increasingly visible on the global stage. Abroad, he has brandished India’s credentials as a pluralist democracy. The mounting criticism in the Western media of his authoritarian tendencies and Hindu chauvinism has seemingly had little effect on India’s diplomatic standing. Meanwhile at home, Modi has arguably been using — perhaps misusing — the full authority of the prime minister’s office to stifle opposition critics. 

Indian Prime Minister Narendra Modi and billionaire businessman Gautam Adani (left) have long had a mutually beneficial relationship that critics allege crosses the line into crony capitalism. Vijay Soneji/Mint via Getty Images.

The morning after Apple sent out its warning, there was an outpouring of anger on social media, with leading opposition figures accusing the government of spying. Apple, as a matter of course, says it is “unable to provide information about what causes us to issue threat notifications.” The logic is that such information “may help state-sponsored attackers adapt their behavior to evade detection in the future.” But the lack of information leaves a gap that is then filled by speculation and conspiracies. Apple’s circumspect message, containing within it the possibility that the threat notification might be false altogether, also gives governments plausible deniability.

Right on cue, Ashwini Vaishnaw, India’s minister of information and technology, managed in a single statement to claim that the government was concerned about Apple’s notification and would “get to the bottom of it” while also dismissing surveillance concerns as just bellyaching. “There are many compulsive critics in our country,” Vaishnaw said about the allegations from opposition politicians. “Their only job is to criticize the government.” Lawyer Apar Gupta, founder of the Internet Freedom Foundation, described Vaishnaw’s statements as an attempt to “trivialize or misdirect public attention.”

The spyware attack on his phone was not the only way Mangnale was targeted after OCCRP published its investigation into the Adani Group’s possibly illegal stock manipulation. In October, the Gujarat police summoned Mangnale and his co-author Ravi Nair to the state capital Ahmedabad to question them about the OCCRP report. Neither journalist lives in the state, which made the police summons, based on a single complaint by an investor in Adani stocks, seem like intimidation. It took the intervention of India’s Supreme Court to grant both journalists temporary protection from arrest.

Before the Supreme Court, the well-known lawyer Indira Jaising had argued that the Gujarat police had no jurisdiction to arbitrarily summon Mangnale and Nair to the state without informing them in what capacity they were being questioned. It seemed, she told the court, like a “prelude to arrest” and thus a violation of their constitutional right to personal liberty. A week later, the Supreme Court made a similar ruling to protect two Financial Times correspondents based in India from arrest. The journalists, in Mumbai and Delhi, had not even written the article based on documents shared by the OCCRP, but were still summoned by police to Gujarat. On December 1, the police are expected to explain to the Supreme Court why they are seemingly so eager to question the reporters.

While the mainstream television news networks in India frequently and loudly debate news topics on air, there is little coverage of the pressure that the Indian government puts on individuals who try to hold the government to account. Ravish Kumar, an esteemed Hindi-language journalist, told me that few people in India were aware of the threat to journalists and opposition voices in Modi’s India. “When people hear allegations made by political figures such as Rahul Gandhi, they can be dismissed as politics rather than fact. There is no serious discussion of surveillance in the press,” he said. 

Kumar once had a substantial platform on NDTV, a respected news network that had built its reputation over decades. In March this year, the Adani Group completed a hostile takeover of NDTV, leading to a series of resignations by the network’s most recognizable anchors and editors, including Kumar. NDTV is now yet another of India’s television news networks owned by corporations that are either openly friendly to the Modi government or unwilling to jeopardize their other businesses by being duly critical. 

Nowadays, Kumar reports for his personal YouTube channel, albeit one with about 7.8 million subscribers. A documentary about his lonely fight to keep reporting from India both accurately and skeptically was screened in cinemas across the U.K. and U.S. in July. 

According to Kumar, journalists and critics are naturally fearful about the Indian government’s punitive measures because some have ended up in prison on the basis of dubious evidence found on their phones and laptops. Most notoriously, a group of respected academics, writers and human rights activists were accused of inciting riots in 2018 and plotting to assassinate the prime minister. Independent analysts hired by The Washington Post reported that the electronic evidence in the case was likely planted.

Some of this possibly planted evidence was found on the computer of Stan Swamy, an octogenarian Jesuit priest who was charged with crimes under India’s anti-terror law and died in 2021 as he awaited trial. Swamy suffered from Parkinson’s disease, which can make everyday actions like eating and drinking difficult. While in custody, he was treated so poorly by the authorities that he had to appeal for a month before he was given a straw to make it easier for him to drink.

The threat of arrest hangs like a Damoclean sword above the heads of journalists like Mangnale who dare to ask questions of power and investigate institutional corruption. Despite the interim stay on his arrest, Mangnale still faces further court proceedings and the possibility of interrogation by the Gujarat police. In the words of Drew Sullivan, OCCRP’s publisher: “The police hauling in reporters for vague reasons seems to represent state-sanctioned harassment of journalists and is a direct assault on freedom of expression in the world’s largest democracy.”

In Africa’s first ‘safe city,’ surveillance reigns
https://www.codastory.com/authoritarian-tech/africa-surveillance-china-magnum/ | Wed, 08 Nov 2023

Nairobi boasts nearly 2,000 Huawei surveillance cameras citywide. But in the nine years since they were installed, it is hard to see their benefits.

Nairobi purchased its massive traffic surveillance system in 2014 as the country was grappling with a terrorism crisis.
Today, the city boasts nearly 2,000 Huawei surveillance cameras citywide, all sending data to the police.
On paper, the system promised the ultimate silver bullet: It put real-time surveillance tools into the hands of more than 9,000 police officers. But do the cameras work?

Lights, cameras, what action? In Nairobi, the question looms large for millions of Kenyans, whose every move is captured by the flash of a CCTV camera at intersections across the capital.

Though government promises of increased safety and better traffic control seem to play on a loop, crime levels here continue to rise. In the 1990s, Nairobi, with its abundant grasslands, forests and rivers, was known as the “Green City in the Sun.” Today, we more often call it “Nairobbery.”

Special series

This is the third in a series of multimedia collaborations on evolving systems of surveillance in medium-sized cities around the world by photographers at Magnum Photos, data geographers at the Edgelands Institute, an organization that explores how the digitalization of urban security is changing the urban social contract, and essayists commissioned by Coda Story.

Our first two essays examined surveillance in Medellín, Colombia and Geneva, Switzerland. Next up: Singapore.

I see it every time I venture into Nairobi’s Central Business District. Navigating downtown Nairobi on foot can feel like an extreme sport. I clutch my handbag, keep my phone tucked away and walk swiftly to dodge “boda boda” (motorbike) riders and hawkers whose claim on pedestrian walks is quasi-authoritarian. Every so often, I’ll hear a woman scream “mwizi!” and then see a thief dart down an alleyway. If not that, it will be a motorist hooting loudly at a traffic stop to alert another driver that their vehicle is being stripped of its parts, right then and there.

Every city street is dotted with cameras. They fire off a blinding flash each time a car drives past. But other than that, they seem to have little effect. I have yet to hear of or witness an incident in which thugs were about to rob someone, looked up, saw the CCTV cameras then stopped and walked away.

Nairobi launched its massive traffic surveillance system in 2014 as the country was grappling with a terrorism crisis. A series of major attacks by al-Shabab militants, including the September 2013 attack at Nairobi’s Westgate shopping complex in which 67 people were killed, left the city reeling and politicians under extreme pressure to implement solutions. A modern, digitized surveillance system became a national security priority. And the Chinese tech hardware giant Huawei was there to provide it. 

A joint contract between Huawei and Kenya’s leading telecom, Safaricom, brought us the Integrated Urban Surveillance System, and we became the site of Huawei’s first “Safe City” project in Africa. Hundreds of cameras were deployed across Nairobi’s Central Business District and major highways, all networked and sending data to Kenya’s National Police Headquarters. Nairobi today boasts nearly 2,000 CCTV cameras citywide.

On paper, the system promised the ultimate silver bullet: It put real-time surveillance tools into the hands of more than 9,000 police officers to support crime prevention, accelerated responses and recovery. Officials say police monitor the Kenyan capital at all times and quickly dispatch first responders in case of an emergency.

But do the cameras work? Nine years since they were installed, it is hard to see the benefits of these electronic eyes that follow us around the city day after day.

Early on, Huawei claimed that from 2014 to 2015, crime had decreased by 46% in areas supported by their technologies, but the company has since scrubbed its website of this report. Kenya’s National Police Service reported a smaller drop in crime rates in 2015 in Nairobi, and an increase in Mombasa, the other major city where Huawei’s cameras were deployed. But by 2017, Nairobi’s reported crime rates surpassed pre-installation levels.

According to a June 2023 report by Coda’s partners at the Edgelands Institute, an organization that studies the digitalization of urban security, there has been a steady rise in criminal activity in Nairobi for nearly a decade.

So why did Nairobi adopt this system in the first place? One straightforward answer: Kenya had a problem, and China offered a solution. The Kenyan authorities had to take action and Huawei had cameras to sell. So they made a deal.

Nairobi’s surveillance apparatus today has become part of the “Digital Silk Road” — China’s quest to wire the world. It is a central component of the Belt and Road Initiative, an ambitious global infrastructure development strategy that has spread China’s economic and political influence across the world. 

This hasn’t been easy for China in the industrialized West, with companies like Huawei battling sanctions by the U.S. and legal obstacles both in the U.K. and European Union countries. But in Africa, the Chinese technology giant has a quasi-monopoly on telecommunications infrastructure and technology deployment. Components from the company make up around 70% of 4G networks across the continent.

Chinese companies also have had a hand in building or renovating nearly 200 government buildings across the continent. They have built secure intra-governmental telecommunications networks and gifted computers to at least 35 African governments, according to research by the Heritage Foundation.

Grace Bomu Mutung’u, a scholar of IT policy in Kenya and Africa who is currently working with the Open Society Foundations, sees this as part of a race to develop and dominate network infrastructure, and to use this position to gather and capitalize on data that flows through networks.

“The Chinese are way ahead of imperial companies because they are approaching it from a different angle,” she told me. She posits that for China, the Digital Silk Road is meant to set a foundation for an artificial intelligence-based economy that China can control and profit from. Mutung’u derided African governments for being so beholden to development that their leaders keep missing the forest for the trees. “We seem to be caught in this big race. We have yet to define for ourselves what we want from this new economy.”

The failure to define what Africa wants from the data-driven economy and an obsession with basic infrastructure development projects is taking the continent through what feels like another Berlin scramble, Mutung’u told me, referring to the period between the late 19th and early 20th centuries that saw European powers increase their stake in Africa from around 10% to about 90%.

“Everybody wants to claim a part of Africa,” she said. “If it wasn’t the Chinese, there would be somebody else trying to take charge of resources.” Mutung’u was alluding to China’s strategy of financing African infrastructure projects in exchange for the continent’s natural resources.

A surveillance camera in one of Nairobi’s matatu buses.

Nairobi was the first city in Africa to deploy Huawei’s Safe City system. Since then, cities in Egypt, Nigeria, South Africa and a dozen other countries across the continent have followed suit. All this has drawn scrutiny from rights groups who see the company as a conduit in the exportation of China’s authoritarian surveillance practices. 

Indeed, Nairobi’s vast web of networked CCTV cameras offers little in the way of transparency or accountability, and experts like Mutung’u say the country doesn’t have sufficient data protection laws in place to prevent the abuse of data moving through surveillance systems. When the surveillance system was put in place in 2014, the country had no data protection laws. Kenya’s Personal Data Protection Act came into force in 2019, but the Office of the Data Protection Commissioner has yet to fully implement and enforce the law.

In a critique of what he described at the time as a “massive new spying system,” human rights lawyer and digital rights expert Ephraim Kenyanito argued that the government and Safaricom would be “operating this powerful new surveillance network effectively without checks and balances.” A few years later, in 2017, Privacy International raised concerns about the risks of capturing and storing all this data without clear policies on how that data should be treated or protected.

There was good reason to worry. In January 2018, an investigation by the French newspaper Le Monde revealed that there had been a data breach at the African Union headquarters in Addis Ababa following a hacking incident. Every night for five years, between 2012 and 2017, data downloaded from AU servers was sent to servers located in China. The Le Monde investigation alleged the involvement of the Chinese government, which denied the accusation. In March 2023, another massive cyber attack at AU headquarters left employees without access to the internet and their work emails for weeks.

The most recent incident brought to the fore growing concerns among local experts and advocacy groups about the surveillance of African leaders as Chinese construction companies continue to win contracts to build sensitive African government offices, and Chinese tech companies continue to supply our telecommunication and surveillance infrastructure. But if these fears have had any effect on agreements between the powers that be, it is not evident.

As the cameras on the streets of Nairobi continue to flash, researchers continue to ponder how, if at all, digital technologies are being used in the approach to security, coexistence and surveillance in the capital city.

The Edgelands Institute report found little evidence linking the adoption of surveillance technology and a decrease in crime in Kenya. It did find that a driving factor in rising crime rates was unemployment. For people under 35, the unemployment rate has almost doubled since 2015 and now hovers at 13.5%.

In a 2022 survey by Kenya’s National Crime Research Centre, a majority of respondents identified community policing as the most effective method of crime reduction. Only 4.2% of respondents identified the use of technology such as CCTV cameras as an effective method.

And the system has meanwhile raised concerns among privacy-conscious members of society about potential infringements on Kenyans’ right to privacy and about the technical capabilities of these technologies, including AI facial recognition. The secrecy often surrounding this surveillance, the Edgelands Institute report notes, complicates trust between citizens and the state.

It may be some time yet before the lights and the cameras lead to action.

Photographer Lindokuhle Sobekwa’s portable camera obscura uses a box and a magnifying glass to take images for this story.

The smart city where everybody knows your name
https://www.codastory.com/authoritarian-tech/kazakhstan-smart-city-surveillance/ | Thu, 26 Oct 2023

In small-town Kazakhstan, an experiment with the “smart city” model has some residents smiling. But it also signals the start of a new mass surveillance era for the Central Asian nation.

At first glance, Aqkol looks like most other villages in Kazakhstan today: shoddy construction, rusting metal gates and drab apartment blocks recall its Soviet past and lay bare the country’s uncertain economic future. But on the village’s outskirts, on a hill surrounded by pine trees, sits a large gray and white cube: a central nervous system connecting thousands of miles of fiber optic cables, sensors and data terminals that keeps tabs on the daily comings and goings of the village’s 13,000 inhabitants. 

This is the command center of Smart Aqkol, a pilot study in digitized urban infrastructure for Kazakhstan. When I visited, Andrey Kirpichnikov, the deputy director of Smart Aqkol, welcomed me inside. Dressed in a black Fila tracksuit and sneakers, the middle-aged Aqkol native scanned his face at a console that bore the logo for Hikvision, the Chinese surveillance camera manufacturer. A turnstile gave a green glow of approval and opened, allowing us to walk through.

“All of our staff can access the building using their unique face IDs,” Kirpichnikov told me.

He led me into a room with a large monitor displaying a schematic of the village. The data inputs and connected elements that make up Smart Aqkol draw on everything from solar panels and gas meters to GPS trackers on public service vehicles and surveillance cameras, he explained. Analysts at the command center report their findings to the mayor’s office, highlighting data on energy use, school attendance rates and evidence for police investigations. 

“I see a huge future in what we’re doing here,” Kirpichnikov told me, gesturing at a heat map of the village on the big screen. “Our analytics keep improving and they are only going to get better as we expand the number of sensory inputs.”

“We’re trying to make life better, more efficient and safer,” he explained. “Who would be opposed to such a project?”
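The architecture Kirpichnikov describes — many small feeds flowing into one command center — is, at its core, simple aggregation. Here is a hypothetical sketch (the block names and meter values are invented; this is not Smart Aqkol’s actual software) of how raw meter readings become the kind of heat map he gestured at:

```python
from collections import defaultdict

# Hypothetical feed: (apartment block, gas meter flow reading).
readings = [
    ("block-1", 2.4), ("block-1", 2.1),
    ("block-2", 0.9), ("block-2", 1.1),
]

usage_by_block: dict[str, float] = defaultdict(float)
for block, flow in readings:
    usage_by_block[block] += flow

# A "heat map" is just these totals drawn onto a village schematic.
# block-2's low total might mean efficient homes -- or unheated ones.
print(dict(usage_by_block))  # {'block-1': 4.5, 'block-2': 2.0}
```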

Much of Aqkol’s housing and infrastructure is from the Soviet era.

Smart Aqkol presents an experimental vision of Kazakhstan’s economic prospects and its technocratic leadership’s governing ambitions. In January 2019, when then-President Nursultan Nazarbayev spoke at the project’s launch, he waxed lyrical about a future in which public officials could use networked municipal systems to run Kazakhstan “like a company.” The smart city model is appealing for leaders of the oil-rich nation, which has struggled to modernize its economy and shed its reputation for rampant government corruption. But analysts I spoke with say it also marks a turn toward Chinese-style public surveillance systems. Amid the war in Ukraine, Kazakhstan’s engagement with China has deepened as a way to hedge against dependence on Russia, its former colonial patron.

Kazakhstan’s smart city initiatives aren’t starting from a digital zero. The country has made strides in digitizing public services, and now ranks second among countries of the former Soviet Union in the United Nations’ e-governance development index. (Estonia is number one.) The capital Astana also has established itself as a regional hub for fintech innovation. 

And it’s not only government officials who want these systems. “There is a lot of domestic demand, not just from the state but also from Kazakhstan’s middle class,” said Erica Marat, a professor at the U.S. National Defense University. There’s an allure about smart city systems, which in China and other Asian cities are thought to have improved living standards and reduced crime.

They also hold some promise of increasing transparency around the work of public officials. “The government hopes that digital platforms can overcome cases of petty corruption,” said Oyuna Baldakova, a technology researcher at King’s College London. This would be a welcome shift for Kazakhstan, which currently ranks 101st out of 180 countries on Transparency International’s Corruption Perceptions Index.

Beyond the town’s main street, many roads remain unpaved in Aqkol.

But the pilot in Aqkol doesn’t quite align with these grander ambitions, at least not yet. Back at the command center, Kirpichnikov described how Aqkol saw a drop in violent crime and alcohol-related offenses after the system’s debut. But in a town of this size, where crime rates rarely exceed single digits, these kinds of shifts don’t say a whole lot. 

As if to better prove the point, the team showed me videos of crime dramatizations that they recorded using the Smart Aqkol surveillance camera system. In the first video, one man lifted another off the ground in what was meant to mimic a violent assault, but looked much more like the iconic scene where Patrick Swayze lifts Jennifer Grey overhead at the end of “Dirty Dancing.” Another featured a man brandishing a Kalashnikov in one hand, while using the other to hold his cellphone to his ear. In each case, brightly colored circles and arrows appeared on the screen, highlighting “evidence” of wrongdoing that the cameras captured, like the lift and the Kalashnikov.

Kirpichnikov then led me into Smart Aqkol’s “situation room,” where 14 analysts sat facing a giant LED screen while they tracked various signals around town. Contrary to the high-stakes energy that one might expect in a smart city situation room, the atmosphere here felt more like that of a local pub, with the analysts trading gossip about neighbors as they watched them walk by on the feeds from street-level cameras.

Kirpichnikov explained that residents can connect their gas meters to their bank accounts and set up automatic gas payments. This aspect of Smart Aqkol has been a boon for the village. Residents I spoke with praised the new payment system — for decades, the only option was to stand in line to pay for their bills, an exercise that could easily take half a day’s time.

And there was more. To highlight the benefits of Smart Aqkol’s analytics work, Kirpichnikov told me about a recent finding: “We were able to determine that school attendance is lower among children from poorly insulated households.” He pointed to a gradation of purple squares showing variance in heating levels across the village. “We could improve school grades, health and the living standards of residents just by updating our old heating systems,” he said.
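The analysis behind a finding like that is, at heart, a correlation between two columns of numbers. A sketch with entirely made-up data (Smart Aqkol has not published its figures) — and, as the next paragraphs suggest, a correlation says nothing about why the pattern exists:

```python
import numpy as np

# Hypothetical per-household records: winter indoor temperature and a
# child's school attendance rate. All numbers invented for illustration.
heating = np.array([14, 16, 17, 18, 19, 20, 21, 22])        # degrees C
attendance = np.array([0.72, 0.75, 0.81, 0.84, 0.88, 0.90, 0.93, 0.95])

r = np.corrcoef(heating, attendance)[0, 1]
print(f"correlation: {r:.2f}")  # strongly positive in this toy data
```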

Kirpichnikov might be right, but step away from the clean digital interface and any Aqkol resident could tell you that poor insulation is a serious problem in the apartment blocks where most people live, especially in winter when temperatures dip below freezing most nights. Broken windows covered with only a thin sheet of cellophane are a common sight. 

Walking around Aqkol, I was struck by the absence of paved roads and infrastructure beyond the village’s main street. Some street lamps work, but others don’t. And the public Wi-Fi that the village prides itself on offering only appeared to function near government buildings.

Informational signs for free Wi-Fi hang across the village despite the network’s limited reach.

The village also has two so-called warm bus shelters — enclosed spaces with heat lamps to shelter waiting passengers during the harsh Kazakh winters. The stops are supposed to have Wi-Fi, charging ports for phones and single-channel TVs. When I passed by one of the shelters, I met an elderly Aqkol resident named Vera. “All of these things are gone,” she told me, waving her hand at evidence of vandalism. “Now all that’s left is the camera at the back.”

“I don’t know why we need all this nonsense here when we barely have roads and running water,” she added with a sigh. “Technology doesn’t make better people.”

Vera isn’t alone in her critique. Smart Aqkol has brought the village an elaborate overlay of digitization, but it’s plain to see that Aqkol still lags far behind modern Kazakh cities like Astana and Almaty when it comes to basic infrastructure. A local resident named Lyubov Gnativa runs a YouTube channel where she talks about Aqkol’s lack of public services and officials’ failures to address these needs. The local government has filed police reports against Gnativa over the years, accusing her of misleading the public.

And a recent documentary made by Radio Free Europe/Radio Liberty — titled “I Love My Town, But There’s Nothing Smart About It” — corroborates many of Gnativa’s observations and includes interviews with dozens of locals drawing attention to water issues and the lack of insulation in many of the village’s homes.

But some residents say they are grateful for how the system has contributed to public safety. Surveillance cameras now monitor the village’s main thoroughfare from lampposts, as well as inside public schools, hospitals and municipal buildings.

“These cameras change the way people behave and I think that’s a good thing,” said Kirpichnikov. He told a story about a local woman who was recently harassed on a public bench, noting that this kind of interaction would often escalate in the past. “The woman pointed at the camera and the man looked up, got scared and began to walk away.”

A middle-aged schoolteacher named Irina told me she feels much safer since the project was implemented in 2019. “I have to walk through a public park at night and it can be intimidating because a lot of young men gather there,” she said. “After the cameras were installed they never troubled me again.”

A resident of Aqkol.

The Smart Aqkol project was the result of a deal between Kazakhtelecom, Kazakhstan’s national telecommunications company; the Eurasian Resources Group, a state-backed mining company; and Tengri Lab, a tech startup based in Astana. But the hardware came through an agreement under China’s Digital Silk Road initiative, which seeks to wire the world in a way that tends to reflect China’s priorities when it comes to public infrastructure and social control. Smart Aqkol uses surveillance cameras made by Chinese firms Dahua and Hikvision, which in China have been used — and touted, even — for their ability to track “suspicious” people and groups. Both companies are sanctioned by the U.S. due to their involvement in surveilling and aiding in the repression of ethnic Uyghurs in Xinjiang, an autonomous region in western China.

Critics are wary of these kinds of systems in Kazakhstan, where skepticism of China’s intentions in Central Asia has been growing. The country is home to a large Uyghur diaspora of more than 300,000 people, many of whom have deep ties to Xinjiang, where both ethnic Uyghurs and ethnic Kazakhs have been systematically targeted and placed in “re-education” camps. Protests across Kazakhstan in response to China’s mass internment campaign have forced the government to negotiate the release of thousands of ethnic Kazakhs from China, but state authorities have walked this line carefully, in an effort to continue expanding economic ties with Beijing.

Although Kazakhstan requires people to get state permission if they want to hold a protest — and permission is regularly denied — demonstrations nevertheless have become increasingly common in Kazakhstan since 2018. With Chinese-made surveillance tech in hand, it’s become easier than ever for Kazakh authorities to pinpoint unauthorized concentrations of people. Hikvision announced in December 2022 that its software is used by Chinese police to set up “alarms” that are triggered when cameras detect “unlawful gatherings” in public spaces. The company also has claimed that its cameras can detect ethnic minorities based on their unique facial features.
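Mechanically, a “gathering alarm” of the kind Hikvision describes can be as plain as a person-counter and a threshold. The sketch below is a deliberately simplified stand-in — not Hikvision’s implementation, and the camera IDs and limit are invented — but it shows how little code separates counting people from flagging a protest:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    person_count: int  # people an object detector found in the frame

def gathering_alarm(detections: list[Detection], threshold: int = 20) -> list[str]:
    """Return IDs of cameras whose frames exceed a crowd-size threshold."""
    return [d.camera_id for d in detections if d.person_count >= threshold]

frames = [Detection("cam-03", 4), Detection("cam-17", 35)]
print(gathering_alarm(frames))  # ['cam-17']
```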

Much of Aqkol’s digitized infrastructure shows its age.

Marat of the U.S. National Defense University noted the broader challenges posed by surveillance tech. “We saw during the Covid-19 pandemic how quickly such tech can be adapted to other purposes such as enforcing lockdowns and tracing people’s whereabouts.”

“Such technology could easily be used against protest leaders too,” she added.

In January 2022, instability triggered by rising energy prices resulted in the government issuing “shoot to kill” orders against protesters — more than 200 people were killed in the ensuing clashes. The human rights news and advocacy outlet Bitter Winter wrote at the time that China had sent a video analytics team to Kazakhstan to use cameras it had supplied to identify and arrest protesters. Anonymous sources in its report alleged that the facial profiles of slain protesters were later compared with the facial data of individuals who appeared in surveillance video footage of riots, in an effort to justify government killings of “terrorists.”

With security forming a central promise of the smart city model, broad public surveillance is all but guaranteed. The head of Tengri Lab, the company leading the development of Smart Aqkol, has said in past interviews that school security was a key motivation behind the company’s decision to spearhead the use of artificial intelligence-powered cameras.

“After the high-profile incident in Kerch, we added the ability to automatically detect weapons,” he said, referencing a mass shooting at a college in Russian-occupied Crimea that left more than 20 people dead in October 2018. In that same speech he made an additional claim: “All video cameras in the city automatically detect massive clusters of people,” a veiled reference to the potential for this technology to be used against protesters.

Soon, there will be more smart city systems across Kazakhstan. Smart Aqkol and Kazakhtelecom have signed memorandums of understanding with Almaty, home to almost 2 million people, and Karaganda, with half a million, to develop similar systems. “The mayor of Karaganda was impressed by our technology and capabilities, but he was mainly interested in the surveillance cameras,” Kirpichnikov told me.

As to the question of whether these systems share data with Chinese officials, “we simply don’t have a clear answer on who has the data and how it is used,” Marat told me. “We can’t say definitively whether China has access but we know its companies are extremely dependent on the Chinese state.”

When I reached out to Tengri Lab to ask whether there are concerns regarding the safety of private data connected to the project, the company declined to comment.

Residents of Aqkol.

What does all this mean for Aqkol? The village is so small that the faces captured on camera are rarely those of strangers. The analysts told me they recognize most of the town’s 13,000 inhabitants between them. I asked whether this makes people uncomfortable, knowing their neighbors are watching them at all times.

Danir, a born-and-raised Aqkol analyst in the situation room, told me he doesn’t believe the platform will be abused. “All my friends and family know I am watching from this room and keeping them safe,” he said. “I don’t think anybody feels threatened — we are their friends, their neighbors.”

“People fear what they don’t understand and people complain about the cameras until they need them,” said Kirpichnikov. “There was a woman once who spoke publicly against the project but after we returned her lost handbag — after we spotted it on a camera — she started to see the benefits of what we are building here.”

After a few years with the system up and running, “it’s normal,” said Danir with a shrug. “Nobody has complained to me.”

For regular people, it doesn’t mean a whole lot. And that may be OK, at least for now. As Irina, the schoolteacher whom I met on the village’s main thoroughfare, put it: “I don’t really know what a smart city is, but I like living here. They say we’re safer and my bills are lower than they used to be, and I’m happy.”

The post The smart city where everybody knows your name appeared first on Coda Story.

When AI doesn’t speak your language https://www.codastory.com/authoritarian-tech/artificial-intelligence-minority-language-censorship/ Fri, 20 Oct 2023 14:07:03 +0000 https://www.codastory.com/?p=47275 Better tech could do a lot of good for minority language speakers — but it could also make them easier to surveil

If you want to send a text message in Mongolian, it can be tough – it’s a script that most software doesn’t recognize. But for some people in Inner Mongolia, an autonomous region in northern China, that’s a good thing.

When authorities in Inner Mongolia announced in 2020 that Mongolian would no longer be the language of instruction in schools, ethnic Mongolians — who make up about 18% of the region’s population — feared the loss of their language, one of the last remaining markers of their distinctive identity. News of the decision, and then plans for protest, flowed across WeChat, China’s largest messaging service. Parents were soon marching by the thousands in the streets of the regional capital, demanding that the decision be reversed.

Why did we write this story?

The AI industry so far is dominated by technology built by and for English speakers. This story asks what the technology looks like for speakers of less common languages, and how that might change in the near term.

With the remarkable exception of the so-called Zero Covid protests of 2022, demonstrations of any size are incredibly rare in China, partly because online surveillance prevents large numbers of people from openly discussing sensitive issues in Mandarin, much less planning public marches. But because automated surveillance technologies have a hard time with Mongolian, protesters had the advantage of being able to coordinate with relative freedom.

Most of the world’s writing systems have been digitized under a single centralized standard, known as Unicode, but the Mongolian script was encoded so sloppily that it is barely usable. Instead, people rely on a jumble of competing, often incompatible programs when they need to type in Mongolian. WeChat has a Mongolian keyboard, but it’s unwieldy, and users often prefer to send each other screenshots of text instead. The constant exchange of images is inconvenient, but it has the unintended benefit of being much more complicated for authorities to monitor and censor.

All but 60 of the world’s roughly 7,000 languages are considered “low-resource” by artificial intelligence researchers. Mongolian belongs to this vast majority: languages barely represented online, whose speakers contend with the many challenges that come with the predominance of English on the global internet. As technology improves, automated processes across the internet — from search engines to social media sites — may start to work a lot better for under-resourced languages. This could do a lot of good, giving speakers of those languages access to all kinds of tools and markets, but it will likely also reduce the degree to which languages like Mongolian fly under the radar of censors. The tradeoff for languages that have historically hovered on the margins of the internet is between safety and convenience on one hand, and freedom from censorship and intrusive eavesdropping on the other.

Back in Inner Mongolia, when parents were posting on WeChat about their plans to protest, it became clear that the app’s algorithms couldn’t make sense of the jpegs of Mongolian cursive, said Soyonbo Borjgin, a local journalist who covered the protests. The images and the long voice messages that protesters would exchange were protected by the Chinese state’s ignorance — there were no AI resources available to monitor them, and overworked police translators had little chance of surveilling all possibly subversive communication. 

China’s efforts to stifle the Mongolian language within its borders have only intensified since the protests. Curious about the technological dimensions of the battle, Borjgin began looking into a machine learning system being developed at Inner Mongolia University. The system would allow computers to read images of the Mongolian script after being trained on reams of digitized printed material published when Mongolian still had Chinese state support. While reporting the story, Borjgin was told by the lead researcher that the project had received state money. Borjgin took this as a clear signal: The researchers were getting funding because what they were doing amounted to a state security project. The technology would likely be used to prevent future dissident organizing.

First-graders on the first day of school in Hohhot, Inner Mongolia Autonomous Region of China in August 2023. Liu Wenhua/China News Service/VCG via Getty Images.

Until recently, AI has only worked well for the vanishingly small number of languages with large bodies of texts to train the technology on. Even national languages with hundreds of millions of speakers, like Bangla, have largely remained outside the priorities of tech companies. Last year, though, both Google and Meta announced projects to develop AI for under-resourced languages. But while newer AI models are able to generate some output in a wide set of languages, there’s not much evidence to suggest that it’s high quality. 

Gabriel Nicholas, a research fellow at the Center for Democracy and Technology, explained that once tech companies have established the capacity to process a new language, they have a tendency to congratulate themselves and then move on. A market dominated by “big” languages gives them little incentive to keep investing in improvements. Hellina Nigatu, a computer science PhD student at the University of California, Berkeley, added that low-resource languages face the risk of “constantly trying to catch up” — or even losing speakers — to English.

Researchers also warn that even as the accuracy of machine translation improves, language models miss out on important, culturally specific details that can have real-world consequences. Companies like Meta, which partially rely on AI to review social media posts for things like hate speech and violence, have run into problems when they try to use the technology for under-resourced languages. Because they’ve been trained on just the few texts available, their AI systems too often have an incomplete picture of what words mean and how they’re used.

Arzu Geybulla, an Azerbaijani journalist who specializes in digital censorship, said that one problem with using AI to moderate social media content in under-resourced languages is the “lack of understanding of cultural, historical, political nuances in the way the language is being used on these platforms.” In Azerbaijan, where violence against Armenians is regularly celebrated online, the word “Armenian” itself is often used as a slur to attack dissidents. Because the term is innocuous in most other contexts, it’s easy for AI and even non-specialist human moderators to overlook its use. She also noted that AI used by social media platforms often lumps the Azerbaijani language together with languages spoken in neighboring countries: Azerbaijanis frequently send her screenshots of automated replies in Russian or Turkish to the hate speech reports they’d submitted in Azerbaijani.

But Geybulla believes improving AI for monitoring hate speech and incitement in Azerbaijani will lock in an essentially defective system. “I’m totally against training the algorithm,” she told me. “Content moderation needs to be done by humans in all contexts.” In the hands of an authoritarian government, sophisticated AI for previously neglected languages can become a tool for censorship. 

According to Geybulla, Azerbaijani currently has such “an old school system of surveillance and authoritarianism that I wouldn’t be surprised if they still rely on Soviet methods.” Given the government’s demonstrated willingness to jail people for what they say online and to engage in mass online astroturfing, she believes that improving automated flagging for the Azerbaijani language would only make the repression worse. Instead of strengthening these easily abusable technologies, she argues that companies should invest in human moderators. “If I can identify inauthentic accounts on Facebook, surely someone at Facebook can do that too, and faster than I do,” she said. 

Different languages require different approaches when building AI. Indigenous languages in the Americas, for instance, show forms of complexity that are hard to account for without either large amounts of data — which they currently do not have — or diligent expert supervision. 

One such expert is Michael Running Wolf, founder of the First Languages AI Reality initiative, who says developers underestimate the challenge of American languages. While working as a researcher on Amazon’s Alexa, he began to wonder what was keeping him from building speech recognition for Cheyenne, his mother’s language. Part of the problem, he realized, was computer scientists’ unwillingness to recognize that American languages might present challenges that their algorithms couldn’t understand. “All languages are seen through the lens of English,” he told me.

Running Wolf thinks Anglocentrism is mostly to blame for the neglect that Indigenous languages have faced in the tech world. “The AI field, like any other space, is occupied by people who are set in their ways and unintentionally have a very colonial perspective,” he told me. “It’s not as if we haven’t had the ability to create AI for Indigenous languages until today. It’s just no one cares.” 

American languages were put in this position deliberately. Until well into the 20th century, the U.S. government’s policy position on Indigenous American languages was eradication. From 1860 to 1978, tens of thousands of children were forcibly separated from their parents and kept in boarding schools where speaking their mother tongues brought beatings or worse. Nearly all Indigenous American languages today are at immediate risk of extinction. Running Wolf hopes AI tools like machine translation will make Indigenous languages easier to learn to fluency, making up for the current lack of materials and teachers and reviving the languages as primary means of communication.

His project also relies on training young Indigenous people in machine learning — he’s already held a coding boot camp on the Lakota reservation. If his efforts succeed, he said, “we’ll have Indigenous peoples who are the experts in natural language processing.” Running Wolf said he hopes this will help tribal nations to build up much-needed wealth within the booming tech industry.

The idea of his research allowing automated surveillance of Indigenous languages doesn’t scare Running Wolf so much, he told me. He compared their future online to their current status in the high school basketball games that take place across North and South Dakota. Indigenous teams use Lakota to call plays without their opponents understanding. “And guess what? The non-Indigenous teams are learning Lakota so that they know what the Lakota are doing,” Running Wolf explained. “I think that’s actually a good thing.”

The problem of surveillance, he said, is “a problem of success.” He hopes for a future in which Indigenous computer scientists are “dealing with surveillance risk because the technology’s so prevalent and so many people speak Chickasaw, so many people speak Lakota or Cree, or Ute — there’s so many speakers that the NSA now needs to have the AI so that they can monitor us,” referring to the U.S. National Security Agency, infamous for its snooping on communications at home and abroad.

Not everyone wishes for that future. The Cheyenne Nation, for instance, wants little to do with outsiders, he told me, and isn’t currently interested in using the systems he’s building. “I don’t begrudge that perspective because that’s a perfectly healthy response to decades, generations of exploitation,” he said.

Like Running Wolf, Borjgin believes that in some cases, opening a language up to online surveillance is a sacrifice necessary to keep it alive in the digital era. “I somewhat don’t exist on the internet,” he said. Because their language has such a small online culture, he said, “there’s an identity crisis for Mongols who grew up in the city,” pushing them instead towards Mandarin. 

Despite the intense political repression that some of China’s other ethnic minorities face, Borjgin said, “one thing I envy about Tibetan and Uyghur is once I ask them something they will just google it with their own input system and they can find the result in one second.” Even though he knows that it will be used to stifle dissent, Borjgin still supports improving the digitization of the Mongol script: “If you don’t have the advanced technology, if it only stays to the print books, then the language will be eradicated. I think the tradeoff is okay for me.”

The post When AI doesn’t speak your language appeared first on Coda Story.

Without space to detain migrants, the UK tags them https://www.codastory.com/authoritarian-tech/uk-gps-tagging-home-office-asylum/ Thu, 21 Sep 2023 14:25:08 +0000 https://www.codastory.com/?p=46581 The Home Office says electronically tracking asylum seekers is a humane alternative to detention. But migrants say it’s damaging their mental health

The U.K. is presenting asylum seekers with an ultimatum: await deportation and asylum processing in Rwanda, face detention or wear a tracking device. Or leave voluntarily.

As thousands of people continue to arrive in the U.K., the British authorities are scrambling for new ways to monitor and control them. Under the government’s new rules, Britain has a legal duty to detain and deport anyone who arrives on its shores via truck or boat regardless of whether they wish to seek asylum. Passed in July 2023, the Illegal Migration Act has already been described by the United Nations Human Rights Office as “exposing refugees to grave risks in breach of international law.”

More than 20,000 people have come to the U.K. on small boats so far in 2023, and some 175,000 people are already waiting for an asylum decision. But officials say the U.K. does not have the physical space to detain people under the new law. And a public inquiry report published this week argued that the U.K. should not detain migrants for more than 28 days. The report found evidence of abusive, degrading and racist treatment of migrants held in a detention center near London’s Gatwick Airport.

With detention centers at capacity and under scrutiny for mistreating migrants, and with the Rwanda scheme facing court challenges, those awaiting deportation or asylum proceedings are increasingly being monitored using technology instead, such as GPS-enabled ankle trackers that allow officials to follow the wearer’s every move. The ankle tracker program, which launched as a pilot in June 2022, was initially scheduled to last 12 months. But this summer, without fanfare, the government quietly uploaded a document to its website with the news that it was continuing the pilot to the end of 2023.

A Home Office spokesperson told me that “the GPS tracking pilot helps to deter absconding.” But absconding rates among migrants coming to the U.K. are low: The Home Office itself reported that they stood at 3% in 2019 and 1% in 2020, in response to a Freedom of Information request filed by the advocacy group Migrants Organize. In other official statements, the Home Office has expressed concern that the Rwanda policy may lead to “an increased risk of absconding and less incentive to comply with any conditions of immigration bail.” So authorities are fitting asylum seekers with GPS tags to ensure they don’t disappear before they can be deported.

Privacy advocates say the policy is invasive, ineffective and detrimental to the mental and physical health of the wearers. 

“Forging ahead, and massively expanding, such a harmful scheme with no evidence to back up its usefulness is simply vindictive,” said Lucie Audibert, a legal officer at the digital rights group Privacy International, which launched a legal challenge against the pilot program last year, arguing there were not adequate safeguards in place to protect people’s basic rights. 

Migrants who have been tagged under the scheme say the experience is dehumanizing. “It feels like an outside prison,” said Sam, a man in his thirties who fled a civil war with his family when he was a small child and has lived in the U.K. ever since. Sam, whose name has been changed, was told by the Home Office at the end of last year that he would need to wear a tag while the government considered whether to deport him after he had served a criminal sentence.

The Home Office has also outsourced the implementation of the GPS tracking system to Capita PLC, a private security company. Capita has been tasked with fitting tags and monitoring the movements and other relevant data collected on each person wearing a device. For migrants like Sam, that means dealing with anonymous Capita staff — rather than the government — whenever a tag needs to be fitted, checked or replaced.

After a month of wearing the tag, Sam felt depression beginning to set in. He was worried about leaving the house, for fear of accidentally bumping the strap. He was afraid that if too many problems arose with the tracker, the Home Office might use it as an excuse to deport him. Another constant anxiety weighed on him too: keeping the device charged. Capita staff told him its battery could last 24 hours. But he soon found out that wasn’t true — and it would lose charge without warning when he was out, vibrating loudly and flashing with a red light.

“Being around people and getting the charger out so you can charge your ankle — it’s so embarrassing,” Sam said. He never told his child that he had been tagged. “I always hid it under tracksuits or jeans,” he said, not wanting to burden his child with the constant physical reminder that he could be deported.

The mental health problems Sam experienced are not unusual for people who have to wear tracking devices. In the U.S., border authorities first deployed ankle monitors in 2014, in response to an influx of migrants from Central America. According to a 2021 study surveying 150 migrants forced to wear the devices, 12% said wearing the tags led to thoughts of suicide, while 40% said they believed they had been psychologically scarred by the experience.

Capita staff regularly showed up at Sam’s home to check on the tag, and they often came at different times than the Home Office told Sam they would come. Sometimes, they would show up without any warning at all. 

Sam remembered an occasion when Capita officers told him that “the system was saying the strap had been tampered with.” The agents examined his ankle and found nothing wrong with the device. This became a routine: The team showed up randomly to tell him there was a problem or that his location wasn’t registering. “It was all these little things that seemed to make out I was doing something wrong. In the end, I realized it wasn’t me, it was the tag that was the problem. I felt harassed,” Sam told me. 

At one point, Sam said he received a letter from the Home Office saying he had breached his bail conditions because he had not been home when the Capita people came calling. According to Home Office documents, breaching bail conditions is a good enough reason for the government to have access to a migrant’s “trail data”: a live inventory of a person’s precise location every minute of the day and night. He’s worried that this tracking data might be used against him as the government deliberates on whether or not to deport him. 

Sam is not alone in dealing with glitches. In a study of 19 migrants tagged under the British scheme, 15 participants reported practical issues, such as devices failing or chargers not working.

When I asked Capita to comment on these findings, the company redirected me to the Home Office, which denied that there were any concerns. “Device issues are rare and service users are provided with a 24-hour helpline to report any problems,” a government spokesperson said. They then added: “Capita’s field and monitoring staff receive safeguarding training and are able to signpost tag wearers to support organizations where appropriate.”

Migration campaigners say contracts like the one the Home Office has with Capita serve to line the pockets of big private security companies at taxpayers’ expense, while helping the government push out the message that it is being tough on immigration.

“Under this government, we have seen a steep rise in the asylum backlog,” said Monish Bhatia, a lecturer in Sociology at the University of York, who studies the effects of GPS tagging. “Instead of directing resources to resolving this backlog,” he told me, “they have come up with rather expensive and wasteful gimmicks.” 

The ankle monitor scheme forms part of Britain’s so-called “hostile environment” policy, introduced more than a decade ago by then-Home Secretary Theresa May, who described it as an effort to “create, here in Britain, a really hostile environment for illegal immigrants.” It has seen the government pour billions of pounds into deterring and detaining migrants — from building a high-tech network of surveillance along the English Channel in an attempt to thwart small boat crossings to the 120 million pound ($147 million) deal to deport migrants to Rwanda.

The Home Office estimates it will have to spend between 3 and 6 billion pounds (between $3.68 and $7.36 billion) on detaining, accommodating and removing migrants over the next two years. But the option to tag people, while cheaper than keeping them locked up, still costs the government significant amounts of money. The U.K. currently has two contracts with security companies for electronically tagging both migrants and people in the criminal justice system: one with G4S, worth 22 million pounds ($27.5 million), to provide the tag hardware, and another with Capita, worth 114 million pounds ($142 million), to run the electronic tagging service, fitting and troubleshooting the tags.

The Home Office said the GPS tagging scheme would help streamline the asylum process and that it was “determined to break the business model of the criminal people smugglers and prevent people from making dangerous journeys across the Channel.” 

For his part, Sam eventually got his tag removed — he was granted an exception due to the tag’s effects on his mental health. For weeks after the tag was gone, he said, it felt like it was still there. He still put his clothes and shoes on as if the tag were strapped to his ankle.

“It took me a while to realize I was actually free from their eyes,” he said. But his status remains uncertain: He is still facing the threat of deportation.

Correction: An earlier version of this article incorrectly stated Monish Bhatia’s affiliation. As of April 2023, he is a lecturer at the University of York, not Birkbeck, University of London.

The post Without space to detain migrants, the UK tags them appeared first on Coda Story.

For migrants under 24/7 surveillance, the UK feels like ‘an outside prison’ https://www.codastory.com/authoritarian-tech/gps-ankle-tags-uk-migrants-home-office/ Wed, 13 Sep 2023 14:47:54 +0000 https://www.codastory.com/?p=46426 He’s lived in the UK since he was a small child. But the Home Office wants to deport him — and track him wherever he goes

In June 2022, the U.K. Home Office rolled out a new pilot policy — to track migrants and asylum seekers arriving in Britain with GPS-powered ankle tags. The government argues that ankle tags could be necessary to stop people from absconding or disappearing into the country. Only 1% of asylum seekers absconded in 2020. But that hasn’t stopped the Home Office from expanding the pilot. Sam, whose name we’ve changed to protect his safety, came to the U.K. as a refugee when he was a small child and has lived in Britain ever since. Now in his thirties, he was recently threatened with deportation and was made to wear a GPS ankle tag while his case was in progress. Here is Sam’s story, as told to Coda’s Isobel Cockerell.

I came to the U.K. with my family when I was a young kid, fleeing a civil war. I went to preschool, high school and college here. I’m in my thirties now and have a kid of my own. I don’t know anything about the country I was born in — England is all I know. 

I got my permanent residency when I was little. I remember my dad also started applying for our British citizenship when I was younger but never quite got his head around the bureaucracy. 

When I got older, I got into a lifestyle I shouldn’t have and was arrested and given a criminal sentence and jail time. The funny thing is, just before I was arrested, I had finally saved up enough to start the process of applying for citizenship myself but never got around to it in time.

In the U.K., if you’re not a citizen and you commit a crime, the government has the power to deport you. It doesn’t matter if you’ve lived here all your life. So now, I’m fighting the prospect of being kicked out of the only country I’ve ever known. 

When I finished my sentence, they kept me in prison under immigration powers. When I finally got bail, they said I’d have to wear a GPS-powered ankle tag so that I didn’t disappear. I couldn’t believe it. If I had been a British citizen, when I finished my sentence that would be it, I’d be free. But in the eyes of the government, I was a foreigner, and so the Home Office — immigration — wanted to keep an eye on me at all times. 

My appointments with immigration had a strange quality to them. I could tell from the way we communicated that the officers instinctively knew they were talking to a British person. But the system had told them to treat me like an outsider and to follow the procedures for deporting me. They were like this impenetrable wall, and they treated me like I was nothing because I didn’t have a passport. They tried to play dumb, like they had no idea who I was or that I had been here my whole life, even though I’ve always been in the system.

I tried to explain there was no need to tag me and that I would never abscond. After all, I have a child here who I want to stay with. They decided to tag me anyway.

The day came when they arrived in my holding cell to fit the tag. I was shocked by its bulkiness. I thought to myself, ‘How am I going to cover this up under my jeans?’ I love to train and keep fit, but I couldn’t imagine going to the gym with this thing around my ankle. 

It’s hard to explain what it’s like to wear that thing. When I was first released — after many months inside — it felt amazing to be free, to wake up whenever I wanted and not have to wait for someone to come and open my door.

But gradually, I started to realize I wasn’t really free. And people did come to my door. Not prison guards, but people from a private security company. I later learned that company is called Capita.  When things go wrong with the tag, it’s the Capita people who show up at your home.

The visits were unsettling. I had no idea how much power the Capita people had or whether I was even obliged, legally, to let them in. The employees themselves were a bit clueless. Sometimes I would level with them, and they would admit they had no idea why I was being tagged.

It soon became clear that the technology attached to my ankle was pretty glitchy. One time, they came and told me, ‘The system says the tag had been tampered with.’ They checked my ankle and found nothing wrong. It sent my mind whirring. What had I done to jolt the strap? I suddenly felt anxious to leave the house, in case I knocked it while out somewhere. I began to move through the world more carefully. 

Other times, Capita staff came round to tell me my location had stopped registering. The system wasn’t even functioning, and that frustrated me. 

All these issues seemed to make out like I was the one doing something wrong. But I realize now it was nothing to do with me — the problem was with the tag, and the result was that I felt harassed by these constant unannounced visits by these anonymous Capita employees. 

In theory, the Home Office would call to warn you of Capita’s visits, but often they just showed up at random. They never came when they said they would. Once, I got a letter saying I breached my bail conditions after not being home when they came around. But I’d never been told they were coming in the first place. It was so anxiety-inducing: I was afraid if there were too many problems with the tag, it might be used against me in my deportation case. 

The other nightmare was the charging system. According to the people who fit my tag, the device could last 24 hours between charges. It never did. I’d be out and about or at work, and I’d have to calculate how long I could stay there before I needed to go home and charge. The low battery light would flash red, the device would start loudly vibrating, and I’d panic. Sometimes others would hear the vibration and ask me if it was my phone. Being around people and having to charge up your ankle is so embarrassing. There’s a portable charger, but it’s slow. If you want to charge up quicker, you have to sit down next to a plug outlet for two hours and wait. 

I didn’t want my child to know I’d been tagged or that I was having problems with immigration. I couldn’t bear the thought of trying to explain why I was wearing this thing around my ankle or that I was facing deportation. Whenever we were together I made sure to wear extra-loose jeans. 

I couldn’t think beyond the tag. It was always on my mind, a constant burden. It felt like this physical reminder of all my mistakes in life. I couldn’t focus on my future. I just felt stuck on that day when I was arrested. I had done my time, but the message from the Home Office was clear: There was no rehabilitation, at least not for me. I felt like I was sinking into quicksand, being pulled down into the darkness. 

My world contracted, and my mental health went into freefall. I came to realize I wasn’t really free: I was in an outside prison. The government knew where I was 24/7. Were they really concerned I would abscond, or did they simply want to intrude on my life? 

Eventually, my mental health got so bad I was able to get the tag removed, although I’m still facing deportation.

After the tag was taken off, it took me a while to absorb that I wasn’t being tracked anymore. Even a month later, I still put my jeans on as if I had the tag on. I could still kind of feel it there, around my ankle. I still felt like I was being watched. Of course, tag or no tag, the government always has other ways to monitor you. 

I’ve begun to think more deeply about the country I’ve always called home. This country that says it no longer wants me. The country that wants to watch my every move. I’m fighting all of it to stay with my child, but I sometimes wonder if, in the long term, I even want to be a part of this system, if this is how it treats people.

The post For migrants under 24/7 surveillance, the UK feels like ‘an outside prison’ appeared first on Coda Story.

Researchers say their AI can detect sexuality. Critics say it’s dangerous https://www.codastory.com/authoritarian-tech/ai-sexuality-recognition-lgbtq/ Thu, 13 Jul 2023 14:41:56 +0000 https://www.codastory.com/?p=45224 Swiss psychiatrists say their AI deep learning model can tell if your brain is gay or straight. AI experts say that’s impossible

Between autonomous police dog robots, facial recognition cameras that let you pay for groceries with your smile and bots that can write Wordsworthian sonnets in the style of Taylor Swift, it is beginning to feel like AI can do just about anything. This week, a new capability has been added to the list: A group of researchers in Switzerland say they’ve developed an AI model that can tell if you’re gay or straight. 

The group has built a deep learning AI model that they say, in their peer-reviewed paper, can detect the sexual orientation of cisgender men. The researchers report that by studying subjects’ electrical brain activity, the model is able to differentiate between homosexual and heterosexual men with an accuracy rate of 83%. 

“This study shows that electrophysiological trait markers of male sexual orientation can be identified using deep learning,” the researchers write, adding that their findings had “the potential to open new avenues for research in the field.”

The authors contend that it “still is of high scientific interest whether there exist biological patterns that differ between persons with different sexual orientations” and that it is “paramount to also search for possible functional differences” between heterosexual and homosexual people. 

Is that so? When the study was posted on Twitter, it drew a strong reaction from researchers and scientists studying AI. Experts on technology and LGBTQ+ rights fundamentally objected to the prospect of measuring sexual orientation by studying brain patterns.

“There is no such thing as brain correlates of homosexuality. This is unscientific,” tweeted Abeba Birhane, a senior fellow in trustworthy AI at Mozilla. “Let people identify their own sexuality.”

“Hard to think of a grosser or more irresponsible application of AI than binary-based ‘who’s the gay?’ machines,” tweeted Rae Walker, who directs the PhD in nursing program at the University of Massachusetts in Amherst and specializes in the use of tech and AI in medicine.

Sasha Costanza-Chock, a tech design theorist and associate professor at Northeastern University, criticized the fact that in order for the model to work, it had to leave bisexual participants out of the experiment.

“They excluded the bisexuals because they would break their reductive little binary classification model,” Costanza-Chock tweeted.

Sebastian Olbrich, chief of the Centre for Depression, Anxiety Disorders and Psychotherapy at the University Hospital of Psychiatry Zurich and one of the study’s authors, explained in an email that “scientific research often necessitates limiting complexity in order to establish baselines. We do not claim to have represented all aspects of sexual orientation.” Olbrich said any future study should extend the scope of participants.

“Bisexual and asexual individuals exist but are ‘simplified away’ by the Swiss study in order to make their experimental setup workable,” said Qinlan Shen, a research scientist at software company Oracle Labs’ machine learning research group who was among those criticizing the study. “Who or what is this technology being developed for?” they asked. 

Shen explained that technology claiming to “measure” sexual orientation is often met with suspicion and pushback from people in the LGBTQ+ community who work on machine learning. This type of technology, they said, “can and will be used as a tool of surveillance and repression in places of the world where LGBT+ expression is punished.” 

Shen also disagrees with the idea of trying to find a fully biological basis for sexuality. “I think in general, the prevailing view of sexuality is that it’s an expression of a variety of biological, environmental and social factors, and it’s deeply uncomfortable and unscientific to point to one thing as a cause or indicator,” they said.

This isn’t the first time a machine learning paper has been criticized for trying to detect signs of homosexuality. In 2018, researchers at Stanford tried to use AI to classify people as gay or straight, based on photos taken from a dating website. The researchers claimed their algorithm was able to detect sexual orientation with up to 91% accuracy — a much higher rate than humans were able to achieve. The findings led to an outcry and widespread fears of how the tool could be used to target or discriminate against LGBTQ+ people. Michal Kosinski, the lead author of the Stanford study, later told Quartz that part of the objective was to show how easy it was for even the “lamest” facial recognition algorithm to be trained into also recognizing sexual orientation and potentially used to violate people’s privacy. 

Mathias Wasik, the director of programs at All Out, has been campaigning for years against gender and sexuality recognition technology. All Out’s campaigners say that this kind of technology is built on the mistaken idea that gender or sexual orientation can be identified by a machine. The fear is that it can easily fuel discrimination. 

“AI is fundamentally flawed when it comes to recognizing and categorizing human beings in all their diversity. We see time and again how deep learning applications reinforce outdated stereotypes about gender and sexual orientation because they’re basically a reflection of the real world with all its bias,” Wasik told me. “Where it gets dangerous is when these systems are used by governments or corporations to put people into boxes and subject them to discrimination or persecution.”

The Swiss study was published in June, less than a month after Uganda’s president signed a new, repressive anti-LGBTQ+ law — one of the harshest in the world — that includes the death penalty for “aggravated homosexuality.” In Poland, activists are busy challenging the country’s “LGBTQ-free zones” — regions that have declared themselves hostile to LGBTQ+ rights. And the U.S. Supreme Court just issued a ruling that effectively legalizes certain kinds of discrimination against LGBTQ+ people. Identity-based threats against LGBTQ+ people around the world are clear and present. What’s less clear is whether AI should have any role in mitigating them.

The study’s researchers say that their work could help combat political movements advocating for conversion therapy by showing that sexual orientation has a biological basis.

“Our research is absolutely not intended for use in prosecution or repression — nor would it seem to be a practicable method for such abuse,” said Olbrich. “There is no proof that this method could work in an involuntary setting. It is a sad reality that many technologies can be misused; the ethical responsibility is to prevent misuse, not halt the progress of scientific study.”

He added that the study’s objective was to identify the neurological correlates — not causes — of sexual orientation, in the hope of gaining a more nuanced understanding of human diversity. 

“Our work should be seen as a contribution to the larger quest to comprehend the remarkable workings of our neurons, reflecting our behaviors and consciousness. We didn’t set out to judge sexual orientation, but rather to appreciate its diversity. We regret if people felt uncomfortable with the findings,” he said. 

“However true these good intentions might be,” said Shen, “I don’t think it erases the inherent potential harms of sexual orientation identification technologies.”

On Twitter, Rae Walker, the UMass nursing professor, was more blunt.

“Burn it to the ground,” they said.

The post Researchers say their AI can detect sexuality. Critics say it’s dangerous appeared first on Coda Story.

Israel uses Palestine as a petri dish to test spyware https://www.codastory.com/authoritarian-tech/israel-spyware-palestine-antony-loewenstein/ Thu, 22 Jun 2023 10:41:55 +0000 https://www.codastory.com/?p=44680 Journalist Antony Loewenstein discusses how Israeli surveillance tech is tested in Palestine before being exported across the world

Israel is one of the world’s biggest suppliers of surveillance technology. Its defense companies provide spyware to everyone, from autocrats in Saudi Arabia to democrats in the European Union. It is an Israeli company that the widow of Washington Post columnist Jamal Khashoggi is suing for the hacking of her phone in the months leading up to her husband’s murder in the Saudi consulate in Istanbul.

While Israeli companies are perhaps the most high-profile purveyors of spyware, several companies headquartered in the United States and in Europe also sell surveillance technology. And persistent regulatory inconsistencies and blindspots suggest that there is still considerable reluctance, globally, to legislate to prevent the misuse of such technology. In Europe, this week, countries including France, Germany and the Netherlands have been arguing for the need to install spyware to surveil journalists if security agencies deem it necessary. 

As governments vacillate over regulation, human rights abuses continue. Last month, Israel was reported to be using facial recognition software called Red Wolf to deliberately and exclusively track Palestinians. Journalist Antony Loewenstein was based for several years in East Jerusalem. In his new book, “The Palestine Laboratory,” he explores how Israel has turned Palestine into a testing ground for surveillance tools that Israeli companies then export to governments around the world. I spoke with Loewenstein, who lives in Australia, over the phone.

This conversation has been edited for length and clarity. 

When did the privatization of the Israeli defense industry begin and why was that an important moment?

For the first decade of Israel’s existence after 1948, it was all state run. The Six-Day War [in 1967], when Israel, in six days, took control of the West Bank and Gaza and East Jerusalem, really accelerated the defense industry. By the 1970s, there was a fairly healthy private Israeli arms industry. Some of the companies that had been public before were now private. But it’s important to remember that both in the past, and also now, with organizations like NSO Group, most of these companies are private in name only. They are arms of the state. 

They are used by the state to forward and pursue their diplomatic aims. In the last 10 or so years, Benjamin Netanyahu, the prime minister, and Mossad, the Israeli intelligence agency, have gone around the world to countries that are not friends with Israel and have held out Israeli spyware as a carrot. Basically, Israel is saying, ‘If you are friends with us, if you help us, if you join with us in the U.N. in certain ways, if you don’t criticize us so much, we will sell you this unbelievably effective spyware.’ And since the Russian invasion of Ukraine, there have been huge numbers of European countries and others desperately coming to Israel, wanting defense equipment to protect themselves from any potential Russian attack.

How has Israel’s tech industry changed borders across the world?

Maybe the most prominent example, although not particularly well known, is the Israeli surveillance towers on the U.S.-Mexico border. They were installed a number of years ago, and it doesn’t make much of a difference whether it’s a Democrat or a Republican in the White House. In fact, Biden is accelerating this technological border, so to speak, and the company that America has used is Elbit, which is Israel’s biggest defense company. They have done a lot of work in the West Bank and across the Israel-Gaza border. And the reason the U.S. used Elbit as a contractor was because they liked what Elbit was doing in Palestine. I mean, the company promotes itself as being ‘successful’ in Palestine.

Does this border technology change the willingness of states to commit violent acts?

I don’t think necessarily violence becomes less likely. But I think in some contexts, Israeli surveillance tech, what you see being tested on Palestinians, makes it far easier for regimes to not go down the path of killing people en masse. Instead, they just massively surveil their populations, which allows them to get all the information they potentially need without the need for the bad images, so to speak, of mass violence. However, I also think that with an almost inevitable surge in climate refugees and with global migration at its largest since World War II, a lot of nations will actually revert to extreme violence on their borders.

You can see what the EU has been doing in the last few years with the assistance of Israeli drones, unarmed drones. The EU has made the decision with Frontex, their border — so-called — security, to allow the vast majority of brown or black bodies on boats to drown. That’s a conscious political decision. They don’t feel that way about Ukrainian refugees. And just for the record, I think all people should be welcomed. But the European Union does not see it that way. And the idea that you could possibly in years to come have armed drones hovering over the Mediterranean, firing on boats, shooting boats out of the water, I think is very conceivable.

Does Israel’s defense industry pose a threat to its allies?

It does. To me, the relationship between Israel and the U.S. is like an abusive relationship. On the face of it, very close. I think they love each other. They’re expressing admiration for each other all the time. Without the financial, diplomatic and military support from the U.S., Israel would arguably not exist. And yet, according to the most accurate figure that I could find, every single day the NSA, America’s leading intelligence agency and the biggest intelligence agency in the world, has roughly 400 Hebrew speakers spying on Israel. Spying on their best friend. And rest assured, that works in reverse as well.

They don’t really trust each other. More importantly, in the last few years, the Biden administration has talked about trying to curtail the power of Israeli spyware. A year and a half ago, they sanctioned NSO Group, the company behind Pegasus. A lot of the media was saying, ‘Oh, this is fantastic, the White House is now taking spyware seriously.’ But I think that’s misunderstanding the issue. America doesn’t want competition. They don’t want a real challenge to their dominance in spyware. They’re pissed off that Israeli spyware, which has been sold to dozens and dozens of countries around the world, threatens their hegemony.

You wrote in the book about how the Covid pandemic has been a wake up call for Israelis to how they, too, are vulnerable to surveillance.

For many Israeli Jews, for many years, all the surveillance was happening over there. It was happening to Palestinians in the West Bank and East Jerusalem. Israeli Jews didn’t really feel it themselves. They were being surveilled, but they were either unaware of it or didn’t seem to care. During the pandemic, Israel had lockdowns like a lot of other countries. A lot of Israel’s biggest defense companies — Elbit and NSO Group — pivoted to developing various tools to supposedly fight the pandemic. But it was still mass surveillance, mass monitoring, which they now used within Israel itself. 

For the first time, a lot of Israeli Jews discovered that they themselves were being monitored, that their phones had been hacked. Eventually, the occupation always comes home. Slowly, Israeli Jews are waking up to the reality that what’s happening literally down the road in Palestine will inevitably bleed back into their own world.

The post Israel uses Palestine as a petri dish to test spyware appeared first on Coda Story.

Digital footprints on the dark side of Geneva https://www.codastory.com/authoritarian-tech/geneva-digital-surveillance/ Thu, 15 Jun 2023 14:59:10 +0000 https://www.codastory.com/?p=43823 Photographer Thomas Dworzak documents digital surveillance of daily life in one of Europe’s wealthiest cities

For this photo essay, Magnum Photos President Thomas Dworzak traveled to Switzerland and documented the lives of Geneva residents along with the digital “footprints” they leave behind every day. Drawing on research by the Edgelands Institute that explored Geneva’s evolving systems of everyday surveillance, Dworzak sought to use photography to tell the story of how the digitalization of our daily lives affects — and diminishes — our security.

Special series

This is the second in a series of multimedia collaborations on evolving systems of surveillance in medium-sized cities around the world by photographers at Magnum Photos, data geographers at the Edgelands Institute, an organization that explores how the digitalization of urban security is changing the urban social contract, and essayists commissioned by Coda Story.

Our first essay examined surveillance on the streets of Medellín, Colombia.

He accompanied Geneva citizens in their daily routines while documenting the digital traces of their activities throughout the day. Dworzak researched the places that store our digital data and photographed them as well — an investigation that proved difficult and revealed the lack of transparency surrounding the handling and storage of personal data.

To conclude the project, Dworzak sent each of his subjects a postcard from places where their digital information is stored: a simple way to demonstrate the randomness of where our digitally collected information ends up.

Thomas writes: 

Do citizens of Geneva understand how surveillance takes place in their daily lives? The relationship between surveillance and power can be understood as a contemporary version of the “social contract,” originally conceptualized by the Genevan philosopher Jean-Jacques Rousseau in his seminal 18th-century work on democracies.

As a photographer, I needed to set the place: Geneva. I wanted to play on the dark side of the quaint, cute and affluent image of one of the world’s wealthiest cities and the world of international relations in which the Genevans are so often entangled.

I needed to trace the connection between life in this comfortable European city and the hidden paths of information that form underneath a surveilled daily life. I spent time with a variety of regular Genevan people, all voluntary participants in our project. I photographed their daily routines, marking whenever they would leave a “digital footprint” when using their phones, credit cards, apps or computers. With the help of the Edgelands team, I then identified corresponding data centers around the world where their information was likely to have been stored. I created a set of postcards using freely available applications like Google Earth and Google Street View. These “postcards from your server” were then sent back to the respective volunteers from the countries where these data centers were located, highlighting the far-flung places our private data goes when we perform a simple task such as buying groceries or a bus ticket.

Geneva, December 2022. Davide agreed to let me track his digital footprints. Here, he shows his ticket on a train.
Geneva, January 2023. Postcard from the server. Google Earth screenshot of the location of the server where the digital footprints of Davide may be stored. Although corporate security and privacy policies prevented us from pinpointing its precise location, we were able to get an approximate idea of where individuals’ data was hosted.
Geneva, January 2023. Postcard from the server. A postcard from a server that may hold Davide’s data was sent back to Davide. This postcard was sent from a server administered by CISCO, at Equinix Larchenstrasse 110, 65993 Frankfurt, Germany.
Geneva, January 2023. United Nations Plaza. The broken leg of the “Broken Chair” monument, a public statue in front of the UN Palais des Nations. The statue is a graphic illustration evoking the violence of war and the brutality of land mines. It has become one of the city’s most recognized landmarks.
Geneva, January 2023. Postcard from the server. Google Earth screenshot of the location of the server where the digital footprints of Hushita may be stored. BUMBLE Equinix Schepenbergweg 42, 1105 AT Amsterdam, Netherlands. Hushita is another volunteer who agreed to let me track her digital footprints.
Geneva, December 2022. The European Organization for Nuclear Research, known as CERN, is an intergovernmental organization that operates the largest particle physics laboratory in the world. Established in 1954, it is based in a northwestern suburb of Geneva. CERN is an official United Nations General Assembly observer and is a powerful model for international cooperation. The history of CERN has shown that scientific collaboration can build bridges between nations and contribute to a broader understanding of science among the general public. In 1989, the World Wide Web was invented at CERN by Sir Tim Berners-Lee, a British scientist.
Geneva, December 2022. Surveillance camera shop.
Geneva, January 2023. Postcard from the server. Google Street View. Screenshot of the location of the server where some of the digital footprints of Renata may be stored. Apple Data Center, Viborg, Denmark. Renata is another volunteer who agreed to let me track her digital footprints.
Geneva, November 2022. Proton corporate server in Geneva. ProtonMail is one of the world’s safest encrypted email services. Nicholas is another volunteer who agreed to let me track his digital footprints.
Geneva, November 2023. Renata uses a digital sports watch.
Geneva, December 2022. Digital footprints with Antoine. The bus stop near his flat is named after Jean-Jacques Rousseau’s “Contrat Social.” Antoine is another volunteer who agreed to let me track his digital footprints.
Geneva, December 2022. Jean-Jacques Rousseau Island. The Genevan philosopher’s fundamental work on democracies is based on the notion of a “social contract.” The Edgelands Institute’s Geneva Surveillance Report examines how the relationship between citizens and surveillance leads to a potential new social contract.
Geneva, January 2023. Postcard from the server. A postcard from the potential server location of Antoine’s digital footprint was sent back to him. This postcard was sent from the server location of GOOGLE MAPS Rue de Ghlin 100, 7331 Saint-Ghislain, Belgium.


Turkey uses journalists to silence critics in exile https://www.codastory.com/authoritarian-tech/turkey-journalists-transnational-repression/ Thu, 08 Jun 2023 13:19:23 +0000 https://www.codastory.com/?p=44180 Using the language of press freedom, Erdogan has weaponized the media to intimidate Turkish dissidents abroad

Early in the morning on May 17, the German police raided the homes of two Turkish journalists and took them into custody. Ismail Erel and Cemil Albay — who work for Sabah, a pro-government Turkish daily headquartered in Istanbul — were released after a few hours, but their arrests provoked strong condemnation in Turkey. Turkish President Recep Tayyip Erdogan, in the midst of a tight presidential race, told an interviewer that “what was done in Germany was a violation of the freedom of the press.”

The Big Idea: Shifting Borders

Borders are liminal, notional spaces made more unstable by unparalleled migration, geopolitical ambition and the use of technology to transcend and, conversely, reinforce borders. Perhaps the most urgent contemporary question is how we now imagine and conceptualize boundaries. And, as a result, how we think about community.

In this special issue are stories of postcolonial maps, of dissidents tracked in places of refuge, of migrants whose bodies become the borderline, and of frontier management outsourced by rich countries to much poorer ones.

The European Centre for Press and Media Freedom also came out in support of the Sabah journalists, condemning the detention and demanding that press freedom be upheld. But Turkey itself is a leading jailer of journalists, ranked 165th out of 180 countries in the 2023 World Press Freedom Index published by Reporters Without Borders. And, according to German prosecutors, Erel and Albay were under investigation for the “dangerous” dissemination of other journalists’ personal data.

German authorities have legitimate concerns about the safety of Turkish journalists living in exile. In July 2021, Erk Acarer, a Turkish columnist, was beaten up outside his home in Berlin. Later that month, German authorities began investigating Turkish nationalist organized crime groups operating in Europe after the police found a hit list of 55 journalists and activists who had fled Turkey.

In September 2022, Sabah published information that revealed the location of Cevheri Guven’s home. It appears likely — though it has not been confirmed by German officials — that this was the reason for the arrests of Erel and Albay. Guven himself had been arrested in Turkey in 2015 and sentenced to over 22 years in prison. He was the editor of a news magazine that had published a cover criticizing Erdogan. Out on bail before his trial, Guven wrote that he gave his “life savings” to a smuggler to get him and his family out of Turkey. He now lives in Germany.

The ability of states such as Germany and Sweden to protect refugees, whether they are fleeing Turkey, China, Russia or Iran, has waned, as authoritarian leaders have become more brazen in using technology to stalk, bully, assault, kidnap and even kill dissidents. The Turkish state’s appetite for targeting critical voices abroad, especially those of journalists, has been growing for some time. As Erdogan’s government has clamped down on media freedom at home, it has co-opted journalists working at government-friendly news outlets into becoming tools of cross-border repression. This has allowed the state to reach outside Turkey’s borders to intimidate journalists and dissidents who have sought refuge in Western Europe and North America.

Since last year, Sabah has revealed details about the locations of several Turkish journalists in exile. In October 2022, it published the address and photographs of exiled journalist Abdullah Bozkurt. The report included details about where he shopped. This was just a month after I met Bozkurt at a cafe in the Swedish capital, Stockholm, where he now lives. Bozkurt told me that he is constantly harassed online by pro-government trolls and that, because of the large Turkish immigrant population in Sweden, many of whom are Erdogan supporters, he has been forced into isolation. It has had, he said, an adverse impact on his children’s quality of life.

Two years before Bozkurt’s personal information was leaked, in June 2020, Cem Kukuc, a presenter on the Turkish channel TGRT Haber, said of Bozkurt and other critical journalists: “Where they live is known, including their addresses abroad. Let’s see what happens if several of them get exterminated.” Just three months after that broadcast, Bozkurt was attacked in Stockholm by unidentified men who dragged him to the ground and kicked him for several minutes. “I think this attack was targeted,” Bozkurt told the Committee to Protect Journalists, “and is part of an intimidation campaign against exiled Turkish journalists with the clear message that we should stop speaking up against the Turkish government.” Bozkurt deleted his address, vehicle and contact information from the Swedish government’s registration system after the 2020 attack, but both Sabah and A Haber, another pro-government media outlet, still published his address last year.

Sabah and A Haber are both owned by the sprawling Turkuvaz Media Group. It is “one of the monopolistic hubs for pro-government outlets,” said Zeyno Ustun, an assistant professor of sociology and digital media and film at St. Lawrence University in the U.S. The group’s chief executive is Serhat Albayrak, the brother of a former government minister, Berat Albayrak, who is also Erdogan’s son-in-law.

Turkuvaz says that its newspapers have a collective readership of 1.6 million. In April, a month before Turkey’s tense general election, in which Erdogan managed to secure his third term as president, Turkuvaz’s channel ATV was the most watched in the country.

A few days before the second round of the presidential election, in late May, I met Orhan Sali, the head of news at the English-language broadcaster A News and the head of the foreign news desk at A Haber. To enter Turkuvaz’s tall, glass-paneled headquarters on the outskirts of Istanbul, I had to pass through three security barriers. An assistant took me to Sali’s spacious office on the third floor. Sali, who was born in Greece, is small with an incongruously graying beard on his round, youthful face. He wore a crisp, white shirt. On a shelf near Sali’s desk sit a couple of awards, including at least one for “independent journalism,” he told me.

In the same breath, Sali also said, “We are pro-Erdogan, we are not hiding it.” He acknowledged that there is a risk in publishing the names of journalists critical of the Turkish government but said it was not unusual. “If you read the British tabloid newspapers,” he told me, “you will find tons of pictures, tons of addresses.” 

This is not entirely accurate, according to Richard Danbury, who teaches journalism at the City University in London. “It is not true,” he told me, “that even tabloids as a matter of course publish people’s addresses and photos of people’s houses, particularly if they have been at risk of being attacked.”

But Sali was unconcerned. He approached a panel of screens covering the wall. Some of these channels, he said, are hardline and totally supportive of Kemal Kilicdaroglu, the main opposition candidate in Turkey’s recent election. “All of them,” he told me, “are terrorists.”

In the lead-up to the presidential election, Turkuvaz outlets such as A News and A Haber gave Kilicdaroglu little to no coverage. Erdogan, meanwhile, received extensive coverage, according to Reporters Without Borders. One pro-government channel, TRT Haber, gave Erdogan 32 hours of airtime compared to just 30 minutes for Kilicdaroglu.

Sali, who seems to have a penchant for deflecting criticism of Turkuvaz’s journalism by comparing it to that of the British press, told me he sees no problem with this lack of balance. “The BBC,” he said, “is supporting the ruler. Who is the ruler? The king. You cannot say anything against the king, can you?”

At least seven journalists who have had their addresses published by Turkuvaz outlets are alleged by Erdogan’s government to be followers of the Islamic cleric Fetullah Gulen, who is suspected of having orchestrated a failed coup against Erdogan in 2016. Since the coup attempt, Erdogan’s government has imprisoned hundreds of critics it refers to as “FETO terrorists,” a derogatory reference to Gulen supporters. Cevheri Guven — the editor whose address in Germany was published in Sabah in September 2022 — is often described in pro-government media as the Joseph Goebbels of FETO, a reference to the Nazi propagandist.

“The 2016 coup had a major effect on the media landscape in Turkey,” said Joseph Fitsanakis, a professor of intelligence and security studies at Coastal Carolina University. “At that point,” he told me, “Erdogan made a conscious decision, a consistent effort to pretty much wipe out any non-AKP voices from the mainstream media landscape.” The AKP, or the Justice and Development Party, was co-founded by Erdogan in 2001.

In October 2022, the Turkish parliament passed sweeping legislation curtailing free speech, including a vaguely worded law under which anyone accused of spreading false information about Turkey’s domestic and foreign security faces up to three years in prison.

Before Erdogan’s rise to power, Turkey did not enjoy total media freedom, said Ustun, the media professor at St. Lawrence University. But, she told me, during his 21 years in politics, “there has been a gradual demise of the media freedom landscape.” Following the widespread protests in 2013, referred to as the Gezi Park protests, and the 2016 coup attempt, “efforts to control the mainstream media as well as the internet have intensified,” she added. The overwhelming majority of mainstream media outlets are now under the control of Erdogan and his allies.

Henri Barkey, a professor at Lehigh University and an adjunct senior fellow at the Council on Foreign Relations, told me that Erdogan has “muscled the press financially” by channeling advertising revenues to pro-government outlets such as those owned by the Turkuvaz Media Group. Erdogan, Barkey says, has also weaponized the law. “They use the judicial system to punish the opposition press for whatever reason,” he told me. “You look left and you were meant to look right, and in Turkey today that is enough.”

The media has, for years now, been used as a tool of transnational repression, says Fitsanakis. In 2020, for instance, the U.K. expelled three Chinese spies who had been posing as journalists. But, Fitsanakis adds, since Russia invaded Ukraine in February 2022, intelligence services in Europe and North America, fueled by a heightened awareness of the threat emanating from Moscow, have been collaborating more closely to remove Russian spies from within their borders. 

The actions of other diplomatic missions too are being more closely monitored. Turkey, one of the most prolific perpetrators of transnational repression, according to Freedom House, has found itself a target of Western surveillance, making it harder for the state to place intelligence operatives inside embassies. In lieu of this traditional avenue for embedding intelligence sources in foreign countries, Fitsanakis believes, governments are turning in greater numbers toward friendly journalists. “It’s the perfect cover,” Fitsanakis told me. “You have access to influential people, and you get to ask a lot of questions without seeming strange.”

Erdogan’s re-election, experts fear, could mean he will further clamp down on democratic freedoms. Barkey believes there will be a brain drain as more intellectuals and critics leave Turkey for more congenial shores. But the evidence suggests that an emboldened Erdogan can still reach them.

“We might see a lot more emphasis on silencing any kind of opposition to Erdogan in the coming years,” Fitsanakis told me. “And because much of the opposition to Erdogan is now coming from Turks abroad, that fight is going to transfer to European soil.”

When your body becomes the border https://www.codastory.com/authoritarian-tech/us-immigration-surveillance/ Wed, 07 Jun 2023 13:30:38 +0000 https://www.codastory.com/?p=44047 Surveillance technology has brought U.S. immigration enforcement away from the border itself and onto the bodies of people seeking to cross it

By the time Kat set foot in the safe house in Reynosa, she had already escaped death’s grip twice.

The first time was in her native Honduras. A criminal gang had gone after Kat’s grandfather and killed him. Then they came for her cousin. Fearful that she would be next, Kat decided she needed to get out of the country. She and her 6-year-old son left Honduras and began the trek north to the United States, where she hoped they could find a safer life.

It was January 2023 when the two made it to the Mexican border city of Reynosa. They were exhausted but alive, free from the shadow of the fatal threats bearing down on their family in Honduras.

But within weeks of their arrival, a cartel active in the area kidnapped Kat and her son. This is not uncommon in Reynosa, one of Mexico’s most violent cities, where criminal groups routinely abduct vulnerable migrants like Kat so they can extort their relatives for cash. Priscilla Orta, a lawyer who worked on Kat’s case and shared her story with me, explained that newly arrived migrants along the border have a “look.” “Like you don’t know where you are” is how she put it. Criminals regularly prey upon these dazed newcomers.

When Kat’s kidnappers found out that she had no relatives in the U.S. that they could shake down for cash, the cartel held her and her son captive for weeks. Kat was sexually assaulted multiple times during that period. 

“From what we understand, the cartel was willing to kill her but basically took pity because of her son,” Orta told me. The kidnappers finally threw them out and ordered them to leave the area. Eventually, the two found their way to a shelter in Reynosa, where they were connected with Orta and her colleagues, who help asylum seekers through the nonprofit legal aid organization Lawyers for Good Government. Orta’s team wanted to get Kat and her son into the U.S. as quickly as possible so they could apply for asylum from inside the country. It was too risky for them to stay in Reynosa, vulnerable and exposed.

For more than a month, Kat tried, and failed, to get across the border using the pathway offered to asylum seekers by the U.S. government. She was blocked by a wall — but not the kind we have come to expect in the polarized era of American border politics. The barrier blocking Kat’s entry to the U.S. was no more visible from Reynosa than it was from any other port of entry. It was a digital wall.

Kat’s arrival at the border coincided with a new policy implemented by the Biden administration that requires migrants to officially request asylum appointments at the border using a smartphone app called CBP One. For weeks, Kat tried to schedule a meeting with an asylum officer on the app, as the U.S. government required, but she couldn’t do it. Every time she tried to book an appointment, the app would freeze, log her out or crash. By the time she got back into CBP One and tried again, the limited number of daily appointment slots were all filled up. Orta and her team relayed the urgency of Kat’s case to border officials at the nearest port of entry, telling them that Kat had been kidnapped and sexually assaulted and was alone in Reynosa with her child. The officers told them they needed to use CBP One. 

“It was absolutely stunning,” Orta recalled. “What we learned was that they want everybody, regardless of what’s happening, to go through an app that doesn’t work.”

And so Kat and her son waited in Reynosa, thwarted by the government’s impenetrable digital wall.

The CBP One app is intended to be used for scheduling an appointment with immigration services.

The southern border of the U.S. is home to an expansive matrix of surveillance towers, drones, cameras and sensors. But this digital monitoring regime stretches far beyond the physical border. Under a program known as “Alternatives to Detention,” U.S. immigration authorities use mobile apps and so-called “smart technologies” to monitor migrants and asylum seekers who are awaiting their immigration hearings in the U.S., instead of confining them in immigrant detention centers. And now there’s CBP One, an error-prone smartphone app that people who flee life-threatening violence must contend with if they want a chance at finding physical safety in the U.S.

These tools are a cornerstone of U.S. President Joe Biden’s approach to immigration. Instead of strengthening the border wall that served as the rhetorical centerpiece of former President Donald Trump’s presidential run, the Biden administration has invested in technology to get the job done, championing high-tech tools that officials say bring more humanity and efficiency to immigration enforcement than their physical counterparts — walls and jail cells.

But with technology taking the place of physical barriers and border patrol officers, people crossing into the U.S. are subjected to surveillance well beyond the border’s physical range. Migrants encounter the U.S. government’s border controls before they even arrive at the threshold between the U.S. and Mexico. The border comes to them as they wait in Mexican cities to submit their facial recognition data to the U.S. government through CBP One. It then follows them after they cross over. Across the U.S., immigration authorities track them through Alternatives to Detention’s suite of electronic monitoring tools — GPS-enabled ankle monitors, voice recognition technology and a mobile app called SmartLINK that uses facial recognition software and geolocation for check-ins.

Once in the U.S., migrants enrolled in Alternatives to Detention’s e-monitoring program say they still feel enveloped by the carceral state: They may be out in the world and free to walk down the street, but immigration authorities are ever-present through this web of monitoring technologies.

The program’s surveillance tools create a “temporal experience of indefinite detention,” said Carolina Sanchez Boe, an anthropologist and sociologist at Aarhus University in Denmark, who has spent years interviewing migrants in the U.S. living under Alternatives to Detention’s monitoring regime.

“If you’re in a detention center, the walls are sort of outside of you, and you can fight against them,” she explained. But for those under electronic surveillance, the walls of a detention center reproduce themselves through technology that is heavily intertwined with migrants’ physical bodies. Immigration authorities are ever-present in the form of a bulky monitoring device strapped to one’s ankle or a smartphone app that demands you take a selfie and upload it at a certain time of day. People enrolled in Alternatives to Detention must keep these technologies charged and fully functioning in order to check in with their supervisors. For some, this dynamic transfers the role of an immigration officer onto migrants themselves. Migrants become subjects of state-sanctioned surveillance — as well as enforcers of it.

One person enrolled in Alternatives to Detention told Sanchez Boe that the program’s electronic monitoring tools moved the bars of a prison cell inside his head. “They become their own border guard, their own jailer,” Sanchez Boe explained. “When you’re on monitoring, there’s this really odd shift in the way you experience a border,” she added. “It’s like you yourself are upholding it.”

As the U.S. government transposes immigration enforcement to technology, it is causing the border to seep into the most intimate spheres of migrants’ lives. It has imprinted itself onto their bodies and minds.

The app that Kat spent weeks agonizing over is poised to play an increasingly important role in the lives of asylum seekers on America’s southern border. 

Most asylum requests had been on hold since 2020 under Title 42, a public health emergency policy that authorized U.S. officials to turn away the majority of asylum seekers at the border due to the Covid-19 pandemic. In January 2023, the same month that Kat arrived in Reynosa, the Biden administration implemented a new system for vulnerable migrants seeking humanitarian exemptions from Title 42. The government directed people like Kat to use CBP One to schedule their asylum appointments with border officials before crossing into the U.S.

But CBP One wasn’t built for this at all — it debuted in 2020 as a tool for companies and people bringing goods across the border to schedule cargo inspections. The decision to use it for asylum seekers was a techno-optimistic hack intended to reduce the messy realities at the border in the late stages of the pandemic.

But what started out as a quick fix has now become the primary entry point into America’s asylum system. When Title 42 expired last month, officials announced a new policy: Migrants on the Mexico side of the border hoping to apply for asylum must now make their appointments through CBP One. This new system has effectively oriented the first — and for many, the most urgent — stage of the asylum process around a smartphone app.

The government’s CBP One policy means that migrants must have a smartphone, a stable internet connection and the digital skills to actually download the app and make the appointment. Applicants must also be able to read English, Spanish or Haitian Creole, the only languages the app offers.

The government’s decision to make CBP One a mandatory part of the process has changed the nature of the country’s asylum system by placing significant technological barriers between some of the world’s most vulnerable people and the prospect of physical safety.

Organizations like Amnesty International argue that requiring asylum seekers to use CBP One violates the very principle upon which the U.S. asylum laws were established: ensuring that people eligible for protection are not turned away from the country and sent back to their deaths. Under U.S. law, people who present themselves to immigration authorities on U.S. soil have a legal right to ask for asylum before being deported. But with CBP One standing in their way, they must first get an appointment before they can cross over to U.S. soil and make their case.

Adding a mandatory app to this process, Amnesty says, “is a clear violation of international human rights law.” The organization argues that the U.S. is failing to uphold its obligations to people who may be eligible for asylum but are unable to apply because they do not have a smartphone or cannot speak one of the three languages available on the app. 

And that’s to say nothing of the technology itself, which migrants and human rights groups working along the border say is almost irredeemably flawed. Among its issues are a facial matching algorithm that has trouble identifying darker skin tones and a glitchy interface that routinely freezes and crashes when people try to log in. For people like Kat, it is nearly impossible to secure one of the limited number of appointments that the government makes available each day.

CBP One success stories are few and far between. Orta recalled a man who dropped to the ground and let out a shriek when he made an appointment. A group of migrants embraced him as he wept. “That’s how rare it is,” she said. “People fall to their knees and hold each other and cry because no one has ever gotten an appointment before.”

The week after Title 42 ended, I checked in with Orta. In the lead-up to the program’s expiration, the Biden administration announced that immigration officials would make 1,000 appointments available on CBP One each day and would lengthen the window of time for asylum seekers to try to book them. But Orta said the changes did not resolve the app’s structural flaws. CBP One was still crashing and freezing when people tried to log in. Moreover, the number of appointments immigration authorities offer daily — 1,000 across the southern border — is not nearly enough to accommodate the demand triggered by the expiration of Title 42.

“It’s still a lottery,” she sighed. “There’s nowhere in the app to say, ‘Hey, I have been sexually abused, please put me first.’ It’s just your name.”

Back in the spring, as Kat struggled with the app day after day, Orta and her colleague decided to begin documenting her attempts. She shared one of those videos with me, taken in early March. Kat — slight, in a black T-shirt — sat in a chair in Reynosa, fidgeting as she waited for CBP One’s appointment-scheduling window to go live. When it did, she let out a nervous sigh, opened the app and clicked on a button to schedule a meeting. The app processed the request for several seconds and then sent her to a new page telling her she didn’t get an appointment. When Kat clicked the schedule button again, her app screen froze. She tried again and again, but nothing worked. She repeated some version of this process every day for a week, while her attorneys filmed. But it was no use — she never succeeded. “It was impossible for her,” Orta said.

Kat is far from the only asylum seeker who has documented CBP One’s shortcomings like this. Scores of asylum seekers attempting to secure an appointment have shared their struggles with the technology in Apple’s App Store. Imagine the most frustrating smartphone issue you’ve ever encountered and then add running for your life to the mix. In the App Store, CBP One’s page features dozens of desperate reviews and pleas for technological assistance from migrants stranded in Mexico.

“This is just torture,” one person wrote. “My girlfriend has been trying to take her picture and scan her passport for 48 hours straight out of desperation. She is hiding in a town where she has no family out of fear. Please help!” Another shared: “If I could give negative stars I would. My family are trying to flee violence in their country and this app and the photo section are all that’s standing in the way. This is ridiculous and devastating.” 

The app, someone else commented, “infringes on human rights. A person in this situation loses to a mechanical machine!”

In Kat’s case, her lawyers tried other routes. They enlisted an academic who studies cartels’ treatment of women along the border to submit an expert declaration in her case. Finally, after more than six weeks of trying and failing to secure an appointment, Kat was granted an exception and allowed to enter the U.S. to pursue her asylum claim without scheduling an appointment on CBP One. Kat and her son are now safely inside the country and staying with a family friend. 

Kat was fortunate to have a lawyer like Orta working on her case. But most people aren’t so lucky. For them, it will be CBP One that determines their fates.

Biden administration officials claim that the tools behind their digitized immigration enforcement strategy are more humane, economical and effective than their physical counterparts. But critics say that they are just jail cells and walls in digital form.

Cynthia Galaz, a policy expert with the immigrant rights group Freedom for Immigrants, told me that U.S. Immigration and Customs Enforcement, which oversees Alternatives to Detention, “is taking a very intentional turn to technology to optimize the tracking of communities. It’s really seen as a way to be more humane. But it’s not a solution.”

Galaz argues that the government’s high-tech enforcement strategy violates the privacy rights of hundreds of thousands of migrants and their broader communities while also damaging their mental health. “The inhumanity of the system remains,” she said.

Alternatives to Detention launched in 2004 but has seen exponential growth under the Biden administration. There are now more than 250,000 migrants enrolled in the digital surveillance system, a jump from fewer than 90,000 people enrolled when Biden took office in January 2021. According to ICE statistics, the vast majority of them are being monitored through SmartLINK, the mobile phone app that people are required to download and use for periodic check-ins with the immigration agency. Migrants enrolled in this system face a long road to a life without surveillance, spending an average of 446 days in the program.

During check-ins, migrants enrolled in the program must upload a photo of themselves, which is then matched to an existing picture taken during their program enrollment using facial recognition software. The app also captures the GPS data of participants during check-ins to confirm their location.
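
To make the mechanics concrete, here is a minimal, purely illustrative sketch of the kind of check-in flow described above: a selfie is scored against the enrollment photo, and GPS coordinates are logged at check-in time. This is not SmartLINK’s actual code or API; every name and threshold below is hypothetical, and the face-matching step is a trivial stub standing in for a real facial recognition model.

from dataclasses import dataclass
from datetime import datetime, timezone

MATCH_THRESHOLD = 0.8  # hypothetical cutoff; a real system would tune this

def face_similarity(enrollment_photo: bytes, selfie: bytes) -> float:
    """Stub returning a 0-1 similarity score; a real system would call
    a trained face-matching model here."""
    return 1.0 if enrollment_photo == selfie else 0.0  # trivial stand-in

@dataclass
class CheckIn:
    participant_id: str
    timestamp: datetime      # when the check-in happened
    latitude: float          # GPS captured at check-in time
    longitude: float
    face_match_score: float  # selfie vs. enrollment photo
    passed: bool

def perform_check_in(participant_id: str, enrollment_photo: bytes,
                     selfie: bytes, latitude: float, longitude: float) -> CheckIn:
    score = face_similarity(enrollment_photo, selfie)
    return CheckIn(participant_id, datetime.now(timezone.utc),
                   latitude, longitude, score, score >= MATCH_THRESHOLD)

Even this toy version makes the privacy stakes visible: a single successful check-in necessarily binds a biometric comparison to a time-stamped location.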

The government’s increasing reliance on SmartLINK has shifted the geography of its embodied surveillance program from the ankle to the face. The widespread use of this facial recognition app is expanding the boundaries of ICE’s digital monitoring system, this time from a wearable device to something that is less visible but ever-more ubiquitous.

Proponents at the Department of Homeland Security say that placing migrants under electronic monitoring is preferable to putting them in detention centers as they pursue their immigration cases in court. But digitization raises a whole new set of concerns. Alongside the psychological effects of technical monitoring regimes, privacy experts have expressed concern about how authorities handle and store the data that these systems collect about migrants.

SmartLINK collects wide swaths of data from participants during their check-ins, including location data, photos and videos taken through the app, audio files and voice samples. An FAQ on ICE’s website says the agency only collects participants’ GPS tracking data during the time of their check-ins, but also acknowledges that it has the technical ability to gather location data in real-time from participants who are given an agency-issued smartphone to use for the program — a key concern for migrants enrolled in the program and privacy experts. The agency also acknowledges that it has access to enrollees’ historical location data, which it could theoretically use to determine where a participant lives, works and socializes. Finally, privacy experts worry that the data collected by the agency through the program could be stored and shared with other databases operated by the U.S. Department of Homeland Security, which oversees ICE — a risk the agency recently conceded in its first-ever analysis of the program.

Hannah Lucal, a technology fellow with the immigrant rights legal firm Just Futures Law, which focuses on the intersection of immigration and technology, has studied the privacy risks of Alternatives to Detention at length. She told me she sees the program’s wide-ranging surveillance as “part of a broader agenda by the state to control immigrant communities and to limit people’s autonomy over their futures and their own bodies.”

And the program’s continuous electronic monitoring has left some migrants with physical and psychological damage. The ankle monitors, Lucal said, “cause trauma for people even after they’ve been removed. They give people headaches and sores on their legs. It can be really difficult to bathe, it can be really difficult to walk, and there’s a tremendous stigma around them.” Meanwhile, migrants using SmartLINK have expressed to Lucal fears of being constantly watched and listened to. 

“People talked about having nightmares and losing sleep over just the anxiety that this technology, which is super glitchy, may be used to justify further punishment,” she explained. “People are really living with this constant fear that the technology is going to be used by ICE to retaliate against them.”

Alberto was busy at work when he missed two calls from his Alternatives to Detention supervisor. The 27-year-old asylum seeker had been under ICE’s e-monitoring system since he arrived in the U.S. in 2019. He was first given an ankle monitor but eventually transitioned over to the agency’s mobile check-in app, SmartLINK. Once a week, Alberto was required to send a photo of himself and his GPS location to the person overseeing his case. On those days, Alberto, who works with heavy and loud machinery, would stay home from his job to ensure everything went smoothly.

But one day this past spring, Alberto’s supervisor called him before his normal check-in time, while he was still at work. He didn’t hear the first two calls over the buzz of the room’s machinery. When things quieted down enough for Alberto to see another call coming in, he picked up. Fuming, Alberto’s supervisor ordered him to come to the program’s office the following day. 

“I told her, ‘Ma’am I have to work, I have three kids, I have to support them,’” he told me in Spanish. 

“That doesn’t matter to me,” the case worker replied. 

When Alberto showed up the next day, as instructed, he was told by his Alternatives to Detention supervisor that he had more than a dozen violations for missing calls and appointments — which he disputes — and he was placed on the ankle monitor once again. 

The monitor is bulky and uncomfortable, Alberto explained. In the summer heat, when shorts are in season, Alberto worries that people who catch a glimpse of the device will think he’s a criminal.

U.S. immigration authorities use GPS-enabled ankle monitors to track the movements of migrants enrolled in the Alternatives to Detention program.
Loren Elliot / AFP via Getty Images.

“People look at you when they see it,” he said, “they think that we’re bad.” The situation has worn on him. “It’s ugly to wear the monitor,” he told me. And it weighs even more heavily on him now that he is not sure when it will come off.

Over the past year, I’ve interviewed dozens of people with extensive knowledge of Alternatives to Detention, including immigration attorneys, researchers, scholars and migrants who are, or were, enrolled in the program. Those discussions, as well as an emerging body of research, suggest that Alberto’s reaction to the electronic monitoring he was exposed to is not uncommon. 

In 2021, the Cardozo School of Law published the most comprehensive study on the program’s effects on participants’ well-being, surveying roughly 150 migrants who wear ankle monitors. Ninety percent of people told researchers that the device harmed their mental and physical health, causing inflammation, anxiety, pain, electric shocks, sleep deprivation and depression. Twelve percent of respondents said the ankle monitor resulted in thoughts of suicide, and 40% told researchers they believed that exposure to the device left them with life-long psychological scars.

Berto Hernandez, who had to wear an ankle monitor for nearly two years, described the device as “torturous.” “Besides the damage they do to your ankles, to your skin, there’s this other implication of the damage it does to your mental health,” Hernandez said.

Hernandez, who uses they/them pronouns, immigrated with their parents to the U.S. from Mexico at age 10. In 2019, when they were 30 years old, they were detained by immigration officers and enrolled in Alternatives to Detention as their deportation case proceeded.

Hernandez was in college while they had to wear the monitor and told me a story about a time they drove to a student retreat with a peer a few hours away from their home in Los Angeles. All of a sudden, the ankle monitor started beeping loudly — an automatic response when it exits the geographic range determined by immigration authorities. 

“I had a full panic attack,” Hernandez told me. “I started crying.” Although they had alerted their case manager that they would be out of town, Hernandez says their supervisor must have forgotten to adjust their location radius. After the incident, Hernandez had a physical reaction every time the device made noise.

“Whenever the monitor beeped, I would get full on panic attacks,” they explained. “Shaking, crying. I was fearful that they were going to come for me.” Hernandez thinks the level of fear and lack of control is part of the program’s objectives. “They want you to feel surveilled, watched, afraid,” they said. “They want to exert power over you.”

Hernandez was finally taken off the ankle monitor in 2021, after appealing to their case manager about bruises the device left on their ankles. Hernandez was briefly allowed to do check-ins by phone but will soon be placed on SmartLINK. They don’t buy the government’s message that these technologies are more humane than incarceration.

“This is just another form of detention,” they told me. “These Alternatives to Detention exert the same power dynamics, the same violence. They actually perpetrate them even more. Because now you’re on the outside. You have semi-freedom, but you can’t really do anything. If you have an invisible fence around you, are you really free?”

Once on SmartLINK, Hernandez will join the 12,700-plus immigrants in the Los Angeles area who are monitored through the facial recognition app. Harlingen, Texas, has more than double that number, with more than 30,600 people placed under electronic monitoring — more than anywhere else in the country. This effectively creates pockets of surveillance in cities and neighborhoods where significant numbers of migrants are being watched through ICE’s e-monitoring program, once again extending the geography of the border beyond its physical range.

“The implication of that is you never really arrive and you never really leave the border,” Austin Kocher, a Syracuse University researcher focusing on U.S. immigration enforcement who has studied the evolving geography of the border, told me. Kocher says these highly concentrated areas of migrant surveillance are known as “digital enclaves”: places where technology creates boundaries that are often invisible to the naked eye but hyperpresent to those who are subjected to the technology’s demands. 

“It’s not like the borders are like the racial impacts of building freeways through our cities, and things like that,” he noted. “They’re kind of invisible borders.”

Administering all of this technology is expensive. The program’s three monitoring devices cost ICE $224,481 daily to operate, according to agency data, or roughly $82 million a year.

There is one clear beneficiary of these expansions. B.I. Incorporated, which started out as a cattle-tracking company before pivoting to prison technology, is the government’s only Alternatives to Detention contractor. It currently operates the program’s technology and manages the system through a $2.2 billion contract with ICE, which is slated to expire in 2025. B.I. is a subsidiary of the GEO Group, a private prison company that operates more than a dozen for-profit immigrant detention centers nationwide on behalf of ICE. GEO Group earned nearly 30% of its total revenue from ICE detention contracts in 2019 and 2020, according to an analysis by the American Civil Liberties Union. Critics like Jacinta Gonzalez, an organizer with the immigrant rights group Mijente, say this entire system is corrupted by profit motives — a money-making scheme for the companies managing the detention system that sets up financial incentives to put people behind physical and digital bars.

And B.I. may soon add another option to its toolkit. In April, ICE officials announced that they are pilot testing a facial recognition smartwatch to potentially fold into the e-monitoring system — an admission that came just weeks after the agency released its first-ever analysis of the program’s privacy risks. In ICE’s announcement of the smartwatch rollout, the agency said the device is similar to a consumer smartwatch but less “obtrusive” than other monitoring systems for migrants placed on them. 

Austin Kocher, the immigration enforcement researcher, said that touting technologies like the smartwatch and the phone app as “more efficient” and less invasive than previous incarnations, like the ankle monitors, is tantamount to “techwashing” — a narrative tactic to gain support and limit criticism for whatever shiny new tech tool the authorities roll out.

“With every new technology, they move the yardstick and say, ‘Oh, this is justified because ankle monitors aren’t so great after all,’” Kocher remarked. For people like Kocher, following the process can feel like an endless loop. First, the government detained migrants. Then it began to release them with ankle monitors, arguing that surveillance was kinder than imprisonment. Then it swapped the monitors for facial recognition, arguing that a smartphone is kinder than a bulky ankle bracelet. Each time, the people in charge say that the current system is more humane than the one that came before. But it’s hard to know where, or how, it will ever end — and who else will be dragged into the government’s surveillance web in the meantime.

For people like Alberto, there is no clear end in sight. He doesn’t know when the monitor will come off. But he knows it won’t be removed until his supervisor gives the okay. He also knows it can’t be allowed to malfunction if he wants to avoid getting in trouble again. And he can see his daughter is paying attention.

Recently, she noticed the monitor and asked him what it was. Alberto tried to keep it light. “It’s a watch,” he told her, “but I wear it on my ankle.” She asked him if she could have one too. 

“No,” he replied. “This one is only for adults.”

Escaping China with a spoon and a rusty nail https://www.codastory.com/authoritarian-tech/uyghur-thailand-escape-xinjiang-jail/ Mon, 05 Jun 2023 12:57:17 +0000 https://www.codastory.com/?p=44030 How one Uyghur man fled Xinjiang via the notorious smugglers' road and broke out of a Thai prison

On April 24, a 40-year-old Uyghur man was reported to have died in a detention center in Thailand. Just a couple of months earlier, in February, another Uyghur man in his forties died in the same center, where about 50 Uyghurs are currently held awaiting possible deportation to China. Over 200 Uyghurs were detained in Thailand in 2014, and about a hundred were estimated to have been deported to China, where their lives were under threat. Activists and human rights groups in Germany and several U.S. cities recently protested outside Thai consulates, demanding the release of Uyghurs still held in detention centers.

Hundreds of Uyghurs fled China in 2014, as the Chinese authorities launched a crackdown on the Muslim-majority ethnic group native to the northwest region of Xinjiang. The aim, the government said, was to stamp out extremism and separatist movements in the region. The authorities called it the “strike hard campaign against violent terrorism” and created a program of repression to closely monitor, surveil and control the Uyghur population.

The authorities bulldozed mosques, confiscated Qurans and treated any expression of religion as extremist. By 2018, as many as one million Uyghurs had been sent to so-called “re-education” camps. Across the region, an extensive high-tech system of surveillance was rolled out to monitor every movement of the Uyghur population. This remains the case to this day, with the Chinese police in Urumqi, the capital of Xinjiang, reportedly requiring residents to download a mobile app that enables the police to monitor their phones.

Back in 2014, Uyghurs seeking to flee the burgeoning crackdown were forced to take a notoriously dangerous route, known as the “smugglers’ road,” through Vietnam, Cambodia and Thailand into Malaysia — from there, they could reach Turkey. Though Malaysia had previously deported some Uyghur Muslims to China, in 2018, a Malaysian court released 11 Uyghurs on human rights grounds and allowed them safe passage to Turkey. By September 2020, despite Chinese anger, Malaysia declared it would not extradite Uyghurs seeking refuge in a third country. 

But before they could make it to Malaysia, many Uyghurs were detained by the immigration authorities in Thailand and returned to China. Human rights groups condemned the deportations, saying that Uyghurs returned to China “disappear into a black hole” and face persecution and torture upon their return. 

Hashim Mohammed, 26, was 16 when he left China. He spent three years in detention in Thailand before making a dramatic escape. He now lives in Turkey — but thoughts of his fellow inmates, who remain in Thai detention, are with him every day. This is his account of how he made it out of China through the smugglers’ road. 

Hashim’s Story 

On New Year’s Day, in 2019, I was released from immigration detention in Istanbul. It was late evening — around 10 p.m. It was the first time I had walked free in five years. And it was the end of my long journey from China’s Uyghur region, which I ran away from in 2014. 

It started back in the city of Urumqi in Xinjiang, 10 years ago now. I was 16 years old and had recently begun boxing at my local gym. In the evenings, I started to spend some time reciting and reading the Quran. The local Chinese authorities were beginning their mass crackdown on Uyghurs in the name of combating terrorist activity. Any display of religious devotion was deemed suspicious. 

The local police considered my boxing gym to be a sinister and dangerous place. They kept asking us what we were training for. They thought we were planning something. They started arresting some of the students and coaches at the gym. Police visited my house and went through all my possessions. They couldn’t find anything.

After some time, the gym closed — like lots of similar gyms all over the Uyghur region. People around me were being arrested, seemingly for no good reason. I realized I couldn’t live the way I wanted in my hometown, so I decided to leave. 

At that time, thousands of Uyghurs were doing the same thing. I had heard of a smugglers’ route out of China, through Cambodia, Vietnam, Thailand and eventually to Malaysia. From there, I’d be able to fly to Turkey and start a new life. We called it the “illegal way.” It’s very quick once you leave China; it takes only seven days to get to Malaysia.

At the border leaving China, we met with the smugglers who would get us out. They stuffed around 12 of us into a regular car, all of us sitting on top of each other. I was traveling alone, I didn’t know anyone else in the car. 

I remember one guy, Muhammad, who I met in the car for the first time. He was from the same area as me. He was with his wife and two kids and seemed friendly. 

The road was terrifying. There was a pit of anxiety in my stomach as the smugglers drove through the mountainous jungle at night at breakneck speed. I watched the speedometer needle always hovering above 100 km/h (about 60 mph), and I couldn’t help thinking about how many people were in the car. We heard about another group, crossing the border into Cambodia in a boat, who nearly drowned. After just seven days, we reached Thailand and the border with Malaysia. We sat in the jungle, trying to decide what to do — we could try climbing the border fence.

But we also saw a rumor on WhatsApp that if you handed yourself in to the Thai border police, they would let you cross the border to Malaysia and fly onward to Turkey within 15 days. People on the app were saying some Uyghurs had already managed it. At this point, we’d been sleeping outside, in the jungle, for days, and we believed it. We handed ourselves in, and the police took a group of us to a local immigration detention center in the Thai jungle. 

Fifteen days slipped by, and we began to realize that we’d made a terrible mistake. With every day that passed, our hope that we would get to Turkey slipped away a little further. No one came to help us. We were worried that the Thai authorities would send us back to China.  

I was put in a dark cell with 12 guys — all Uyghurs like me, all trying to escape China. Throughout our time in jail, we lived under the constant threat of being deported back to China. We were terrified of that prospect. We tried many times to escape.

I never imagined that I would stay there for three years and eight months, from the ages of 16 to 19. I used to dream about what life would be like if I was free. I thought about simply walking down the street and could hardly imagine it. 

There were no windows in the cell, just a little vent at the very top of the room. We used to take turns climbing up, using a rope made out of plastic bags, just to look through the vent. Through the grill, we could see that Thailand was very beautiful. It was so lush. We had never seen such a beautiful, green place. Day and night, we climbed up the rope to peer out through the vent. 

We knew that the detention center we were in was very close to the Thai border. One guy I shared the cell with figured out something about the place: the walls of this building, built for the heat, were actually very thin.

We managed to get hold of two tools. A spoon and an old nail. 

We began, painstakingly, to gouge a hole in the wall of the bathroom block. We took turns. Day and night, we had a rota and quietly scraped away at the wall, making a hole just big enough for a man to fit through. There was a camera in the cell, and the guards checked on us frequently. But they didn’t check the bathroom — and the camera couldn’t see into the bathroom area, either. 

We all got calluses and cuts on our hands from using these flimsy tools to try to dig through the wall. We each pulled 30-minute shifts. To the guards watching the cameras, it looked like we were just taking showers. 

The guys in the cell next door to ours were working on a hole of their own. We planned to coordinate our breakout at the same time, at 2 a.m. on a Sunday. 

We dug through as much of the wall as we could, without breaking through to the other side until the last moment. There was just a thin layer of plaster between us and the outside world. We drew numbers to decide who would be the first to climb out. Out of 12 people, I drew the number four. A good number, all things considered. My friend Muhammad, who I met on the journey to Thailand, pulled number nine. Not so good.

That Sunday, we all pretended to go to sleep. With the guards checking on us every few hours, we lay there with our eyes shut and our minds racing, thinking about what we were about to do.

Two a.m. rolled around. Quietly, carefully, we removed the last piece of the wall, pulling it inward without a noise. The first, second and third man slipped through the hole, jumped down and ran out of the compound. Then it was my turn. I clambered through the hole, jumped over the barbed wire below me and ran.

The guys in the next cell had not prepared things as well as us. They still had a thick layer of cement to break through. They ripped the basin off the bathroom wall and used it to smash through the last layer. It made an awful sound. The guards came running. Six more guys got out after me, but two didn’t make it. One of them was Muhammad. 

The detention center we were in wasn’t very high security. The gate into the complex had been left unlocked. We sprinted out of it, barefoot, in just our shorts and t-shirts, and ran into the jungle on the other side of the road, where we all scattered. 

I hid out for eight days in the jungle as the guards and the local police tracked us through the trees. I had saved some food from my prison rations and drank the water that dripped off the leaves in the humidity.

It’s impossible to move through the undergrowth without making a lot of noise — so when the police got close, we had to just stay dead still and hope they wouldn’t find us. At one point, we were completely surrounded by the police and could hear their voices and their dogs barking and see their flashlights through the trees. It was terrifying.

Finally, after days of walking and hiding in the undergrowth, we made it to Thailand’s border with Malaysia. It’s a tall fence, topped with barbed wire. I managed to climb it and jump over — but the guy I was with couldn’t make it. He was later caught and sent back to detention.

In total, there were 20 of us who had managed to break out of the Thai jail. Eleven made it to Malaysia. The others were caught and are still in the detention center in Thailand. 

After spending another year in detention in Malaysia, I was finally able to leave for Turkey. After two months in Turkish immigration detention, I walked free. I had spent my best years — from the age of 16 until 21 — in a cell. I feel such sorrow when I think of the others who didn’t make it. It’s a helpless feeling, knowing they’re still in there, living under the threat of being sent back to China. 

Now I have a good life in Istanbul. Every morning, I go to the boxing gym. I’d like to get married and start my own family here. But half of me lives in my home region, and my dream is to one day go back to my home country.

Muhammad, my friend who I met on the smugglers’ road, is still in the Thai jail. He’s such an open and friendly person, and he was like my older brother inside. When the hope drained out of me and I broke down, he always reassured me and tried to calm me down. He would tell me stories about the history of Islam and the history of the Uyghur people. I’ll always be grateful to him for that. I think about him, and the other Uyghurs still trapped in Thailand, all the time.

Chatbots of the dead https://www.codastory.com/authoritarian-tech/chatbots-of-the-dead/ Mon, 22 May 2023 14:49:10 +0000 https://www.codastory.com/?p=43527 AI grief chatbots can help us talk to loved ones from beyond the grave. Are we okay with that?

Take everything someone has ever written — every text message, email, journal entry, blog post — and feed it into a chatbot. Imagine that after that person dies, they could then continue to talk to you in their own voice, forever. It’s a concept called “chatbots of the dead.” In 2021, Microsoft was granted a patent for a program that would do exactly that: train a chatbot to emulate the speech of a dead friend or family member.
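
As a rough illustration of how such a system might be wired together (not the method in Microsoft’s patent, whose internals are not described here), the sketch below retrieves a person’s most relevant past messages and packs them into a prompt for a language model. Everything in it, from the function names to the crude word-overlap scoring, is invented for illustration; a production system would use embeddings or fine-tuning instead.

def word_overlap(a: str, b: str) -> float:
    """Crude relevance score: Jaccard overlap of the words in two texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def build_prompt(corpus: list[str], user_message: str, k: int = 3) -> str:
    # Pull the k past messages most similar to what the user just said,
    # then present them as style examples for a language model to imitate.
    examples = sorted(corpus, key=lambda t: word_overlap(t, user_message),
                      reverse=True)[:k]
    lines = ["Reply in the voice of the person who wrote these messages:"]
    lines += [f"- {t}" for t in examples]
    lines.append(f"User: {user_message}")
    return "\n".join(lines)  # this prompt would then be sent to a language model

messages = ["Coffee first, then we talk.", "Proud of you, always.",
            "Miss you too, kiddo."]
print(build_prompt(messages, "I miss our morning coffee"))

The basic move, conditioning generated text on a person’s own words so that the output mimics their voice, is the same whether the examples are stuffed into a prompt, as here, or baked into the model by fine-tuning on the whole corpus.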

“Yes, it’s disturbing,” admitted Tim O’Brien, Microsoft’s general manager of AI programs, when news of the patent hit the headlines. For some, the notion of talking to a loved one from beyond the grave elicited feelings of revulsion and fear, something that philosophy researchers Joel Krueger and Lucy Osler call “the ick factor.” In October 2022, the U.K. researchers, based at Exeter and Cardiff universities respectively, published “Communing with the Dead Online,” a research paper that looks at the role that chatbots could play in the grieving process. Since then, the capabilities of artificial intelligence large language models have snowballed — and so has their influence on our lives. Krueger and Osler say we should consider how chatbots might help us in our darkest days by continuing our relationship with loved ones after they’ve died. 

This conversation has been edited for length and clarity. 

What role could a chatbot potentially play after a person has died?

Lucy: Sadly, Joel’s dad died while we were researching this, which added a very different texture to the writing experience. It changed a lot of the conversations we were having around it.

Joel: I started thinking more carefully about some of the ways I wanted not just to preserve his memory but to create more active, and maybe dynamic, ways of maintaining his presence in my life. I started thinking about what role chatbots and more sophisticated technologies might play in maintaining a continuing bond with him. 

For what it’s worth, I’m still undecided. I’m not sure I’d want a chatbot of my father. But I started thinking more about this issue in that very real context, as I was negotiating my own grief. 

Tell me about the ‘ick factor’ — this response that I’m even having right now, thinking about talking to a family member via a chatbot from beyond the grave.

Lucy: If someone turns around and says, ‘Did you know that we can now create a chatbot of the dead that impersonates someone’s style of voice?’, a very common reaction is: ‘gross,’ ‘ew,’ ‘that’s really scary.’ There’s that kind of knee-jerk reaction. But we think that there might be interesting and complicated things to unpack there. People have this instinctive ick factor when it comes to conversing with the dead. There’s an old Chinese ritual where a paid impersonator of the dead person would attend the funeral and play the role of the deceased, and I think lots of Western ears find that kind of startling and a bit strange. Historically, we recognize that. But the fact that something’s unfamiliar is not a reason to say it has no worth at all. Grieving practices come in all shapes and forms. 

Do you think talking with a chatbot, after someone has died, would interrupt the natural grieving process? Or the stages of grief like denial, bargaining and acceptance?

Lucy: Using a chatbot of the dead isn’t about denying someone has died. It’s about readjusting to a world where you’re very aware that they have died, without letting go of various habits of intimacy. You don’t have to just move on in a very stark sense. We can have a kind of nuanced and ongoing adjustment to someone’s death and take time to emotionally adjust to the absence we now feel, as we learn to inhabit the world without them.

Joel: We’ve always employed various technologies to find ways to maintain a connection with the dead, and this is just one new form of these technologies. There are lots of ways of getting stuck, and certainly, we can get trapped in those patterns of not accepting the loss. For instance, someone could wake up each day, go through the same pictures, watch the same videos, scroll the same Facebook page. It’s unclear to me whether there’s any greater threat when it comes to chatbots. Chatbots do provide a much richer form of reciprocity, a kind of back-and-forth in which the person may feel more present than if we’re just looking at a picture of them. 

Yes — and there are now AI programs that allow you to talk and interact with a video or hologram version of the person that has died. 

Joel: Yes! Since our research came out late last year, the world has already moved on so much. And some of the grief technology now already seems worlds ahead of a chatbot that’s confined to some little textbox on a screen or a phone. 

Lucy: If you think about the “Be Right Back” episode of “Black Mirror,” it has some interesting implications for what the near future might look like. But I think we should be able to say that a chatbot and a living robot replica of a dead partner are different things. 

What are things you worry about with tech companies offering these so-called ‘chatbots of the dead’? 

Lucy: I am much more concerned, for instance, about data being sold from these programs. Or about these things being deliberately designed to be addictive. 

Joel: Or targeted advertising used on them, when you’re grieving. Imagine if you had a chatbot of your dead father, let’s say, that you could activate anytime you want. You might say, ‘Dad, I’m feeling kind of low today. I really miss you.’ And he says, ‘I’m really sorry to hear that sweetheart. Why don’t you go get the new frappuccino at Starbucks for lunch, and that will help elevate your mood?’

Funnily enough, that’s something my dad probably would say. 

Joel: You can imagine those kinds of targeted ads being built into the technology, or very subtle, algorithmically calibrated ways to kind of keep you engaged and potentially keep you stuck in the grief process as a way of driving user engagement. 

I think our concern is more about the people who are designing the chatbots than it is about the individuals who are using them. The real focus needs to be on issues of transparency, privacy and regulation. The motivation for designing this sort of tech should be to build a tool, a continuing bond, instead of something that they want you to come back to again and again and again. And I realize that sounds a bit hopelessly naive when you’re talking about companies that are driven first and foremost by profit.

How Somali workers in the US are fighting Amazon’s surveillance machine https://www.codastory.com/authoritarian-tech/amazon-workers-surveillance/ Wed, 17 May 2023 13:36:26 +0000 https://www.codastory.com/?p=43437 Minnesota just passed a labor bill that could force Amazon to respect the rights of warehouse workers

Amazon’s unbelievably quick turnaround times on deliveries have become a given for many people in the U.S. Order a bottle of mouthwash on a weekday morning and your breath will be minty fresh within a day or even just a few hours. 

But the success of the e-commerce giant’s rapid-fire delivery model depends on what happens inside Amazon’s “fulfillment centers” — sprawling warehouses where workers sort and pack orders for shipment, all under the watchful eye of technical systems that track their every move. For years, workers have said that the company’s algorithmically driven approach pushes them to the brink, treating them “like robots” in the service of meeting unattainable productivity quotas and driving up injuries in the process.

On May 16, lawmakers in Minnesota passed a pioneering workplace safety bill that could improve labor conditions for Amazon employees subjected to the company’s worker tracking system. Organizers behind the legislation say it will provide the strongest labor protections in the nation for people working in warehouses like Amazon’s.

Work in Amazon warehouses is overseen entirely by technology: Algorithms track workers’ speed and productivity, measure the so-called “time off task” that employees spend logged out of their workstations and alert managers when workers don’t meet their productivity quotas. Mohamed Farah, a 50-year-old Amazon employee who came to the U.S. in the mid-1990s as a refugee from Somalia, works a 10-hour shift packaging items for shipment at a Minnesota warehouse. He said the company’s grueling pace of work and “time off task” rules have worn on workers’ bodies and minds, including his own. “They say you have to pack a minimum of 80 boxes per hour, but you cannot do it,” he told me. “If you try to pack 80 per hour, you cannot go to the bathroom. If you go to the bathroom, the rate is down.” 

Amazon’s “time off task” measurement is a constant source of worry for many workers. The company tracks the time employees are gone from their workstations. If you spend too much time away from your station, you get in trouble. Internal company documents obtained by VICE revealed that Amazon can fire employees who accumulate 30 minutes of “time off task” on three separate days over the course of a year. The documents also showed that the company tracks the amount of time employees spend in the bathroom. Some employees have described needing to urinate in bottles while working to avoid penalties for using the bathroom.
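
The reported rule is mechanical enough to express in a few lines. Here is a minimal sketch, using invented numbers and assuming the year is treated as a rolling window (which the documents don’t specify), of how such a threshold policy could be evaluated:

```python
from datetime import date, timedelta

# Hypothetical per-day "time off task" minutes for one worker. Per the
# internal documents VICE obtained, 30 minutes of TOT on three separate
# days over the course of a year can be grounds for firing.
tot_minutes = {
    date(2023, 1, 9): 34,
    date(2023, 2, 2): 12,
    date(2023, 3, 17): 31,
    date(2023, 6, 5): 45,
}

def fireable(tot_by_day, threshold=30, strikes=3, window_days=365):
    """True if the worker hit the threshold on `strikes` days within the window."""
    over = sorted(d for d, minutes in tot_by_day.items() if minutes >= threshold)
    # Slide a one-year window across the offending days.
    for i in range(len(over) - strikes + 1):
        if over[i + strikes - 1] - over[i] <= timedelta(days=window_days):
            return True
    return False

print(fireable(tot_minutes))  # True: Jan 9, Mar 17 and Jun 5 all reach 30 minutes
```

The point of the sketch is how little human judgment is involved: a worker’s job reduces to a counter crossing a threshold.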

Farah, who has worked for Amazon for seven years, said that workers get hurt trying to keep pace with packaging quotas. He has come home with three injuries over the last few years. “You go home feeling very tired. Headache, muscle aches, leg aches,” he told me. “They want us to work like robots.”

Farah’s experience is common across the company. A recent survey of more than 2,000 Amazon workers across eight countries found that the company’s performance monitoring and tracking system has taken a physical and emotional toll on employees’ well-being: 57% of respondents said their mental health suffered due to the company’s productivity monitoring, and 51% claimed it harmed their physical health. Amazon has twice the injury rate of comparable warehouses in the U.S., according to a recent analysis of injury data submitted to federal safety regulators.

“You do a lot of bending and back-and-forth walking for hours. You get thirsty, and you go to the bathroom, and it’s on a different floor,” explained Qali Jama, a 39-year-old Amazon warehouse worker who also hails from Somalia. “And then you go to the bathroom, which only has two toilets. If those toilets are occupied, you need to wait to go to the bathroom. The whole time you’re gone your time accumulates, it adds up. And next thing you know, the manager goes up to you and tells you a couple of days later, ‘You have time off task.’”

It was these conditions that fueled a raft of organizing efforts in Amazon facilities across the U.S., including the nation’s first successful union drive at a company warehouse in New York last year. Workers from East Africa, like Qali, were among the first Amazon employees in the nation to confront the company over its labor practices and have been at the helm of organizing efforts in Minnesota. 

The Minnesota bill, which state lawmakers passed on May 16, will not just apply to Amazon — though lawmakers who supported the legislation said it was spurred by reports of injuries at Amazon and a lack of transparency around the company’s productivity quotas. The policy will require any warehouse with more than 250 employees to provide workers with the quotas and work speed metrics used to evaluate their performance. The law also requires this information to be communicated in employees’ preferred language. Organizers say this will force Amazon to be transparent with its employees about the company’s often opaque workplace productivity metrics — a system they claim increases injuries among workers. The legislation also prohibits employers from imposing quotas that prevent workers from taking bathroom, food, rest and prayer breaks.

The law is the product of years of organizing by East African migrant workers, many of whom came to Minnesota as refugees escaping Somalia’s civil war in the 1990s and formed what became the largest Somali diaspora community in the United States. Somali workers now make up a large share of Amazon’s labor force at its warehouses. They were the first in the country to take on Amazon’s labor practices in 2018, when a cohort of workers in the Minneapolis area staged a walkout over working conditions at local warehouses, forcing the company to the negotiating table. 

“Before anyone in the labor movement, we took on Amazon,” said Abdirahman Muse, the executive director of the Awood Center, a Minnesota-based nonprofit that advocates for East African workers’ rights. “And everybody thought we were crazy. But we were not.” Muse compared the Minnesota Somali workers’ trajectory to “the story of David and Goliath” — a small group of refugees and immigrants “facing one of the biggest companies in the world.”

It was the realities of working inside Amazon’s warehouses that prompted Qali to begin organizing for better labor conditions last year. “When I started working there, I said, ‘This is not right what they are doing,’” she recalled. “I always felt like we were slaves there. I always fought against them, I knew my rights. I felt that there were people who have only been in the United States for 30 days. They need money. They come from hunger. They will take anything. And I think that’s what Amazon depends on.” 

“I want people when they come to America to know that they are still human,” she added. “This country does stand for what they believe, but you have to speak — and act.”

Immigrating to the US? ICE wants your biometrics https://www.codastory.com/authoritarian-tech/us-ice-alternatives-to-detention/ Mon, 01 May 2023 13:48:36 +0000 https://www.codastory.com/?p=43065 From ankle monitors to smart watches, the Biden administration has overseen a boom in tech-driven immigrant surveillance. Two new documents shed light on the program’s scope and practices

The U.S. Immigration and Customs Enforcement agency is testing a shiny new tool for its digital surveillance arsenal. It is a GPS-enabled wristwatch with facial recognition capabilities that will make it easier — officials say — for migrants awaiting immigration hearings to check in with the agency.

From ankle monitors to smartphone apps to the new Fitbit-esque smartwatch, the Biden administration has overseen a dramatic expansion of the technological toolbox used to surveil immigrants awaiting their hearings in the U.S. White House officials say these measures, all part of ICE’s Alternatives to Detention program, are more humane than traditional detention. But critics argue that the system reproduces the dynamics of incarceration with a technocratic spin, compromising the privacy rights of hundreds of thousands of immigrants and asylum seekers while leaving them with lasting psychological damage.

The program has also left migrants and human rights advocates with lots of questions about what exactly the government does with the substantial amounts of data that it collects. In a curious turn of events, less than a month before ICE announced its plans to pilot test the smartwatch, it unveiled its first-ever analysis of privacy risks that the Alternatives to Detention program carries.

All U.S. federal agencies are required by law to assess the potential privacy impacts of any technology they plan to use before actually deploying the software or tool. Although ICE first rolled out its electronic monitoring program in 2004, it didn’t get around to publishing an assessment of the program’s privacy-related risks until just last month.

Nearly two decades overdue, the assessment alludes to — but doesn’t answer — a number of key questions about the technologies that the agency uses to monitor immigrants to the U.S. 

Critics say the document does little to address the privacy, civil liberties and human rights concerns they have long raised with the agency about the program. Instead, they say it presents red flags about ICE’s broad data collection and retention policies — indications that the agency is failing to meaningfully confront the long-term consequences of placing migrants under an invasive surveillance regime.

“These technologies represent an assault on people’s bodily autonomy,” said Hannah Lucal, a technology fellow with the immigrant rights legal firm Just Futures Law, which focuses on the intersection of immigration and technology. Alternatives to Detention, Lucal added, “is not a departure from the system of punishment that ICE produces. It’s really an extension of it.”

Although ICE’s e-monitoring program began two decades ago, the number of migrants subjected to electronic monitoring has exploded during Biden’s presidency. When he took office in January 2021, there were 86,000 people in the program. Now, over 280,000 migrants are enrolled in the digital surveillance system.

Migrants assigned to Alternatives to Detention are placed under one of three forms of electronic surveillance: a GPS ankle monitor with 24/7 location tracking, a phone reporting system that uses voice recognition to verify a person’s identity or a smartphone app that uses facial recognition software and GPS location tracking for check-ins. The smartphone app, SmartLINK, is responsible for the exponential growth in enrollment under the Biden administration. About 253,875 immigrants under ICE’s electronic monitoring system are on SmartLINK, according to the most up-to-date statistics from ICE. That’s up from 26,000 people on the app when Biden took office.

SmartLINK has been a focal point of concern for privacy experts. Despite the rapid addition of migrants to the app in recent years, ICE has provided little information about the data it collects and how it might be shared with other agencies. The app is operated by B.I. Incorporated, a subsidiary of the GEO Group, a private prison company, as part of a $2.2 billion contract with ICE. 

Last year, when I spoke with advocates who sued the federal government for more details about the app’s functionality and data collection policies (the lawsuit is ongoing), they posed some key questions about SmartLINK: Is the data collected by the app accessible to other government agencies? Does SmartLINK have the technical ability to gather location data about Alternatives to Detention enrollees beyond their designated check-ins? Does ICE provide adequate oversight of B.I.? Does B.I. have the ability to share the data it collects on Alternatives to Detention participants with third parties, such as other state agencies, or even other companies?

In 50 pages of explanation and assessment, the document does little to answer these questions. Chief among critics’ concerns are questions about location data tracking. The privacy assessment states that the SmartLINK app is only able to collect GPS location data at the time of the program participants’ check-ins and when they log into the app. But an F.A.Q. about SmartLINK on ICE’s website complicates the picture. The page refers to another SmartLINK device — a B.I.-issued phone with the app pre-installed — given to some program enrollees. According to the F.A.Q., this phone has the technical capability to monitor enrollees’ locations in real time, but “this is not a feature that ICE has elected to use for participants.” 

Lucal, from Just Futures Law, is skeptical. If the agency has the capacity to turn on continuous location monitoring, “there is absolutely no assurance that that is not happening or would not happen,” she told me. The absence of discussion in the privacy assessment about the B.I.-issued phone’s continuous location monitoring capabilities “seems like a gaping hole,” she added. “At any time, it could become active. And how would we know?”

‘Abuse of data is a near-certainty at ICE’

Privacy experts also told me they feared the data collected through Alternatives to Detention could be disseminated to other databases. The privacy assessment acknowledges that there is a risk that information from the electronic monitoring program could be stored in other databases run by the Department of Homeland Security, which oversees ICE. It claims, however, that this risk is “partially mitigated,” pointing to a DHS policy stating that information is shared within the agency in accordance with the law and only for authorized purposes, because officials “must have timely access to all relevant information for which they have a need-to-know to successfully perform their duties.” Jake Wiener, an attorney and surveillance expert with the Electronic Privacy Information Center, says this portion of the policy ostensibly acknowledges that “instead of being mitigated, this risk is an open, ongoing, and harmful practice.”

This is no small matter, Wiener points out, given recent reporting from WIRED that found that ICE employees and contractors abused their access to internal databases to search for information about former partners and coworkers, provided their login information to family members and even shared privileged information with criminals in exchange for cash. The assessment, Wiener says, “fails to consider that abuse of data is a near-certainty at ICE, and that putting that data in more hands by sending it to DHS’s far-reaching databases increases the likelihood of harm.”

The document also reveals that ICE agents and case managers employed by the contractor B.I. have access to the historical location data of migrants who used GPS ankle monitors. The document does not explain why officials would need access to participants’ historical location data — information that could be used to patch together a full picture of enrollees’ routine movements, including where they work and regularly spend time. Last year, a former SmartLINK participant told me he became anxious about the agency’s access to his location data after learning that ICE officers used data collected from workers’ GPS monitors to orchestrate a mass immigration raid at a poultry plant in Mississippi.

“A major concern is that ICE can use any of the data that it extracts to carry out location surveillance of not just the people they subject to these programs, but also anyone who might be in close proximity to them, like family members or people they live with or neighbors,” Lucal said. “There’s just this massive scale of surveillance that’s happening through this program. And the privacy assessment is trying really hard to obscure that, but it’s coming through.”

There is also a glaring omission in the assessment. In an announcement last week, ICE touted its latest surveillance tech tool — GPS-enabled wristwatch trackers —  but there is no mention of the technology in the privacy assessment. One can only wonder how long it will be before ICE endeavors to assess the risks of its newest tool if the agency decides to deploy it en masse when the pilot testing period ends.

White House officials say these technologies are more humane than detention. But they still have adverse, real-world impacts on the people who use them. Migrants I interviewed last year described how the phone app and the agency’s other e-carceration technologies harmed their relationships and employment prospects and brought them emotional and physical distress. 

A 2021 report by the Cardozo School of Law found that 90% of people with ankle monitors said the device negatively affected their physical and mental health, causing everything from electric shocks to sleep disruption, social isolation and suicidal ideation. In interviews with me last year, people forced to use SmartLINK, meanwhile, expressed deep anxieties about the app’s technological glitches, fearing that malfunctions during the check-in process could lead to their deportation. Carlos, an immigrant placed on SmartLINK, described the app as a “shadow” hovering over his family. 

“Every time I get a call from an unknown number and they see it, they think that it’s from ICE asking me where I am.” He said no matter the technology used — from ankle monitors to smartphones — the outcome is the same: “Fear. The only thing that changes is the system.”

Europe cracks down on China’s abuse of extradition https://www.codastory.com/authoritarian-tech/china-extraditions-italy/ Thu, 23 Mar 2023 14:38:11 +0000 https://www.codastory.com/?p=42055 European courts are blocking extraditions to China, but Beijing has plenty of other tools to target dissidents living abroad

A European Court of Human Rights ruling that went into effect in January, halting all extraditions to China, passed an important test earlier this month when the Italian Supreme Court overturned a decision to extradite a businesswoman to China.

The human rights court had determined that states that are party to the European Convention on Human Rights, which includes virtually all European nations except Russia and Belarus, cannot extradite people to China unless the Chinese government can demonstrate that the extradited person will not be tortured or be subject to inhuman and degrading treatment. This shuts down extraditions to a country that does not allow international scrutiny of its penitentiaries, underscoring international concern over the Chinese government’s widening dragnet that tries to bring home dissidents and critics living in exile.

But China still has the capability to tie down its citizens in lengthy legal battles by issuing Interpol red notices — an international alert that requests other countries find and arrest suspects who have fled abroad for extradition or other legal actions — while also deploying an array of illegal tools of repression. Despite Europe’s attempt to close the door on China’s extradition campaigns, Beijing has ratified a spate of new extradition treaties with countries outside of Europe.

In Liu v. Poland, the human rights court, which is based in Strasbourg, France, ruled that extraditing Hung Tao Liu, a Taiwanese man who had appealed his extradition from Poland, would place him at a significant risk of ill treatment and torture. 

The judgment “substantially reduces the chances of extradition of persons to the PRC,” said Marcin Gorski, referring to the People’s Republic of China. Gorski is a Polish professor of law at the University of Lodz who represented Liu in the case.

China alleges Liu led a major telecommunications fraud. In an earlier case, the Spanish government in 2019 extradited 94 Taiwanese citizens to China as part of the same probe. The human rights court’s ruling covers anyone facing extradition to China, whether they are wanted for political reasons or for white-collar economic crimes.

China’s attempts to bring home dissidents and critics who are Chinese citizens living abroad have been intensifying over the past decade in tandem with China’s integration into the global financial system and its emergence as a world power, according to Nate Schenkkan, a senior director of research at Freedom House whose work focuses on authoritarianism.

Beijing has pursued dissidents in all corners of the world, triggering a response from the U.S. The White House has sought to control technology exports that can be used by China to conduct acts of repression while boosting the capacity of domestic law enforcement agencies to deal with the targeting of Chinese dissidents on U.S. soil. Members of Congress have introduced a bill that would define and criminalize transnational repression in federal law.

Russia’s full-scale invasion of Ukraine last year was a wake-up call for Europe to the security threat posed not just by Moscow but also by Beijing. But it has been left mostly to the courts to protect people from China’s expanding reach.

European officials are failing to take action when it comes to the threat posed by China, often relying too heavily on the legal system to sort out the problem, said Laura Harth, the campaign director at the China-focused organization Safeguard Defenders.

While in many cases it is unlikely that China will be successful in its extradition attempts, the burden of defending themselves means the targets are quickly bogged down in costly legal battles, said Harth.

Europe’s human rights court has come under criticism from governments in recent years, accused of politicizing the domestic affairs of countries in Europe. The U.K. has made attempts to ignore the court’s rulings on granting prisoners the right to vote, and ministers have flirted with the idea of quitting the European Convention in response to the barriers it poses to the U.K.’s controversial plans on national immigration policy.

But for now, the court’s ruling on Chinese extraditions seems to be respected.

A Chinese businesswoman last summer was detained while passing through Italy. She was on her way to collect her kids from a holiday with their father in Greece. China had issued an Interpol red notice for her arrest and then requested her extradition.

Enrico Di Fiorino, a lawyer representing the businesswoman, said the European Court of Human Rights ruling was an important part of her defense and was likely to have played a role in winning the case.

Di Fiorino’s client is now free from extradition in Italy, but if she travels to other European countries, she is still at risk. If an Interpol red notice is issued against her while she is in a country that the Chinese government has an extradition treaty with, she risks being caught up in another lengthy legal battle. Hung Tao Liu, in the Poland case, spent five years in prison while litigating his extradition.

Formal extraditions comprise a small part of China’s larger campaign to silence and intimidate its dissidents into returning home. Coercion and harassment make up the bulk of China’s tactics. In fact, extraditions accounted for just 1% of the overall number of people returned to China. Involuntary returns, which include kidnappings, accounted for 64%.

Dissidents in Europe live in a climate of fear, frequently surveilled while their families back in China are harassed by the state. Several European countries have been investigating these more clandestine operations, most notably the use of overseas police stations, which can be used to silence Chinese dissidents living abroad.

Italy has been accused of hosting 11 overseas police stations. Chinese dissidents in the country are relieved by Italy’s court ruling while still fearful of China’s reach, said Harth.

In December, China ratified extradition treaties with Kenya, Congo, Uruguay and Armenia.

For Reinhard Bütikofer, a German member of the European Parliament, this is concerning. But he cautioned that Europe should get its own house in order before European politicians criticize other countries for cooperating with China’s extradition strategy. “I think before we can credibly approach anybody else, we have to clean up our own act first,” he said.

Forget Milk and Eggs: Supermarkets Are Having a Fire Sale on Data About You https://www.codastory.com/authoritarian-tech/supermarkets-kroger-discount-cards-data/ Wed, 22 Feb 2023 19:00:15 +0000 https://www.codastory.com/?p=40469 When you use supermarket discount cards, you are sharing much more than what is in your cart — and grocery chains like Kroger are reaping huge profits selling this data to brands and advertisers

When you hit the checkout line at your local supermarket and give the cashier your phone number or loyalty card, you are handing over a valuable treasure trove of data that may not be limited to the items in your shopping cart. Many grocers systematically infer information about you from your purchases and “enrich” the personal information you provide with additional data from third-party brokers, potentially including your race, ethnicity, age, finances, employment, and online activities. Some of them even track your precise movements in stores. They then analyze all this data about you and sell it to consumer brands eager to use it to precisely target you with advertising and otherwise improve their sales efforts. 

Leveraging customer data this way has become a crucial growth area for top supermarket chain Kroger and other retailers over the past few years, offering much higher margins than milk and eggs. And Kroger may be about to get millions of households bigger.

In October 2022, Kroger and another top supermarket chain, Albertsons, announced plans for a $24.6 billion merger that would combine the top two supermarket chains in the U.S., creating stiff competition for Walmart, the overall top seller of groceries. U.S. regulators and members of Congress are scrutinizing the deal, including by examining its potential to erode privacy: Kroger has carefully grown two “alternative profit business” units that monetize customer information, expected by Kroger to yield more than $1 billion in “profits opportunity.” Folding Albertsons into Kroger will potentially add tens of millions of additional households to this data pool, netting half the households in America as customers.

While Kroger is certainly not the only large retailer collecting and monetizing shopper data through the use of loyalty programs, the company’s evolution from a traditional grocery business to a digitally sophisticated retailer with its own data science unit sets it apart from its larger competitors like Walmart, which also collects, analyzes and monetizes shopper data for brands and for targeted advertising on its own retail ad network.

“I think the average consumer thinks of a loyalty program as a way to save a few dollars on groceries each week. They’re not thinking about how their data is going to be funneled into this huge ecosystem with analytics and targeted advertising and tracking,” said John Davisson, director of litigation at Electronic Privacy Information Center (EPIC) in an interview with The Markup. Davisson added, “And I also think that’s by design.”

Kroger did not respond to multiple requests for comment. Albertsons Companies’ vice president of communications Daphne Avila told The Markup in an emailed statement: “At Albertsons Companies, we appreciate the importance of privacy and take appropriate handling of our customers’ data seriously. We recently updated our Privacy Policy so customers can clearly understand our approach to privacy and the policies that we have put in place to protect their information.”

Walmart did not respond to a request for comment.

What Data Does Kroger Collect, and How?

As a Kroger shopper, your information can be collected both online and in person in their stores. 

When you enter a store: If you have a Kroger app on your phone, Bluetooth beacons may ping the app to record your presence and may send you personalized offers. Your location within the store can be tracked as well. (Kroger says your consent is required and the location tracking stops when you leave.) Kroger also says that in “select locations” store cameras are collecting facial recognition data (this is indicated with signs noting the use of the technology.) 

At the register: If you use your loyalty membership (such as Kroger Plus or Boost), detailed information about your purchases gets added to your shopping history, tied to a unique household identifier.

If you are shopping online at Kroger.com: Third-party trackers send your product page views, search terms, and items that you have added to your shopping cart to Meta, Google, Bing, Pinterest, and Snapchat.

According to the Kroger privacy policy, the company will “only collect information when needed for a particular purpose.” Here is some of the information that the company says it may collect, depending on the specific customer (a hypothetical sketch of such a record follows the list):

  • Personal information: Information you provide when you sign up for the loyalty program: name, email address, mailing address, phone number, membership ID, and unique household identifier
  • Purchase history: Historical in-store and online shopping purchases (with no time limits on how long the information is kept while you are a member)
  • Location: Your precise physical location in the store (with your consent), including when you enter and leave a store (Kroger app, GPS, and Bluetooth beacons inside stores)
  • Financial and payment information: “credit, debit, or other payment card numbers, bank account numbers”
  • Health-related information: “Where permitted by applicable law, to serve you better we may make certain inferences about you based upon your shopping history that are health related”
  • Mobile device data: Mobile advertising ID, IP address, browsing data, use of tracking pixels, and cookies
  • Demographic data: “age, marital or family status (including whether your family includes children), languages spoken, education information, gender, ethnicity and race, employment information, or other demographic information”
  • Biometric data: Facial recognition (in select locations, with signs providing notice)
  • Behavioral inferences: “We create inferred and derived data elements by analyzing your shopping history in combination with other information we have collected about you to personalize product offerings, your shopping experience, marketing messages and promotional offers”
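
Taken together, these categories describe a rich per-household record. Kroger has not published a schema, so the following sketch is purely hypothetical, with field names invented to mirror the categories above:

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: Kroger has not disclosed how it structures this
# data, so every field name here is invented for illustration.
@dataclass
class HouseholdProfile:
    household_id: str          # persistent identifier tied to the loyalty card
    name: str
    email: str
    phone: str
    purchase_history: list = field(default_factory=list)   # (date, store, item, price)
    store_visits: list = field(default_factory=list)       # (store, entered_at, left_at)
    payment_info: list = field(default_factory=list)       # card and account numbers
    health_inferences: list = field(default_factory=list)  # inferred from purchases
    demographics: dict = field(default_factory=dict)       # age, race, income, ...
    behavioral_inferences: dict = field(default_factory=dict)  # price sensitivity, lifestyle
```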

The company says in its privacy policy that the data collection is used to fulfill shopper requests, personalize product offerings, improve services, and “support our business operations and functions.” Kroger notes in disclosures to the Securities and Exchange Commission that “[t]hird-party entities do not have access to identifiable customer data.”

A Look at 84.51, Kroger’s Data Company

Founded in Cincinnati in 1883, Kroger counts 60 million households in the U.S. as regular shoppers at 2,750 stores under the nearly two dozen retail brands that it owns and operates (including Ralphs and Food 4 Less). The chain has stores in 35 states (and the District of Columbia) and had annual sales of more than $137 billion in 2021. Kroger noted in a promotional presentation highlighting the potential benefits of the merger that its “alternative profit businesses”—which include financial services (Kroger Personal Finance), retail advertising (Kroger Precision Marketing), and retail data operations (84.51)—could generate $1 billion in profits per year for investors, though in 2021 the company reported that “alternative profits” contributed “an incremental $150 million of operating profit.”

In 2003, Kroger partnered with Dunnhumby, a data science subsidiary of U.K. supermarket chain Tesco. Dunnhumby was an early innovator in the gathering of shopper data through loyalty programs. After a successful 12-year partnership, Kroger purchased a majority stake in Dunnhumby’s U.S. operations and rolled it into its own data science firm called 84.51, named after the longitude of Kroger’s Cincinnati headquarters. 

84.51 is considered a leader in the industry, selling insights to makers of the products sold in stores like Kroger’s. The company’s clients include more than 1,400 companies, including General Mills, Unilever, Coca-Cola, and Kraft Heinz. The data 84.51 provides to them is used to understand not just what the sales figures are for a given product but also the context of the purchase—context that can only be understood with data about the shopper.

Phil Lempert is the founder and editor of Supermarket Guru and studies trends in the retail grocery business. In an interview with The Markup, Lempert said the type of data that 84.51 sells can answer questions about a particular product: “What is it … that makes a consumer buy it? Is it just price, is it something else? It gives [brands] a road map on how to compete effectively, whether it’s against store brands or competing brands to understand that consumer behavior.”

2,000 Shopper Variables

On its website and marketing materials, 84.51 advertises both the scale and granularity of its data.

“We have collected over 2,000 variables on customers,” claims an 84.51 marketing brochure titled “Taking the Guesswork Out of Audience Targeting.” The historical reach of the data is another selling point, noting that the data includes 18 years of Kroger Plus card data. A page marketing 84.51’s “Collaborative Cloud” says the company has “unaggregated” data about individual product sales “from 2 billion annual transactions across 60 million households with a persistent household identifier.” It adds that this data is “privacy compliant.”

84.51 highlights the broad insights it has gleaned from its shopper data on its marketing pages. A graphic on Kroger Precision Marketing’s website highlights “Ethnic panels (Largest Hispanic panel),” product attributes such as low-sugar and kosher foods, and shopper attributes such as “Lifestyle, price sensitivity, generations, [Household] size and income.”

On a webpage promoting “behavioral analytics,” 84.51 claims “35+ petabytes of first-party customer data, our science—no crystal ball needed.” A petabyte is equal to one million gigabytes. For comparison, Kroger’s trove of customer data is 66 percent larger than the U.S. Library of Congress’s digital collection, which clocked in at 21 petabytes for 2022.
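
The arithmetic behind that comparison:

```python
# The Library of Congress comparison: 35 PB of shopper data vs. a 21 PB
# digital collection.
print((35 - 21) / 21)  # 0.666..., i.e. roughly 66 percent larger
```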

Albertsons’ Avila told The Markup that the company restricts some categories of ad segmentation, stating, “… we do not use groupings related to age, race, gender, ethnicity, income levels or financial status to create customer groups for either our own or third-party promotions.”

How Is Shopper Data Used?

Selling Shopper Insights

Experts told The Markup that companies that sell products in grocery stores don’t have much visibility into what happens after their items are placed on shelves. These brands want granular shopping data that only supermarkets have in order to gauge the success of the brands’ products. In recent years, this data has become harder to come by and therefore more valuable. 

“We’re in a situation now that we’re calling ‘data deprecation,’ ” said Mary Pilecki, an analyst at the market research firm Forrester who studies retail loyalty programs. “Privacy laws have increased globally. There are firms like Apple who are blocking ad tracking software, and of course Google keeps promising that the third-party cookie will go away.” Pilecki said this industry-wide scarcity of high-quality first-party data has left firms scrambling for new suppliers. “Companies are now saying, whoa what do I do? I’m not gonna have this data. Well, loyalty programs are actually a great way to collect this data.”

Targeted Advertising

For supermarkets, collecting intelligence on shoppers is useful not just for selling back to brands but also to enable highly targeted advertising to reach specific shoppers. According to Forrester, retailers and marketplaces were estimated to sell $40 billion in digital ads in 2022, with an expectation of that figure doubling in four years.  

Both Kroger and Albertsons run their own retail ad networks, which provide a way for advertisers to deliver ads to specific targeted segments of their shoppers, optimizing their ad spending. Kroger Precision Marketing (Kroger’s ad network) markets itself to brands and advertisers with the promise of reaching their shoppers via email, digital coupons, apps, online search, influencers, in stores, and even on shoppers’ televisions. For example, the streaming platform Roku launched a partnership with Kroger in 2020 to make “TV advertising more precise and measurable” using Kroger’s data.

One case study on 84.51’s website describes how a snack brand used the company’s data to measure the effect of ads it placed on Roku’s connected TVs. The analysis showed that households that saw the snack ads spent five times more on the brand than the average Kroger shopper. 

For Albertsons’ part, Avila told The Markup that while the company does have “revenue share agreements in place with retail analytics groups,” the company says it takes steps to ensure shopper privacy. “Importantly, we always ensure that customer information we share is de-identified and aggregated, in accordance with our publicly disclosed policy.”

There is at least one piece of evidence that Kroger has been sensitive to clueing shoppers in to its ability to target them this way. A marketing document linked to from Kroger’s website spells out the rules for advertisers participating in Kroger’s retail ad network. Under a section describing the “Tone of voice” to use in Kroger ads, the document notes, “Avoid copy that assumes customer can be identified by: lifestyle, activities, demographics, or gender.”

Regulatory Scrutiny

Kroger’s merger with Albertsons has shined a spotlight on its data collection. While regulators at the Federal Trade Commission are reviewing the proposed deal, lawmakers in Congress have made the companies’ data operations a focus of attention, albeit one that has taken a back seat to concerns about competition, food prices, and the impact of the merger on employees. At a November Senate subcommittee hearing about the merger, Sen. Mike Lee (R-UT) asked if Americans “really need a grocery store chain with wealthy owners who collect more of their personal data.”

Sen. Marsha Blackburn (R-TN) asked Kroger CEO Rodney McMullen, in a letter following up on the hearing, how the combined companies would collect data. McMullen responded, “Our combined customer insights enable us to deliver more personalized promotion strategies, saving customers time and money.”

A witness at the hearing, Consumer Reports’ senior researcher Sumit Sharma, disagreed that the merger would provide any obvious new benefit in personalizing the shopping experience for consumers. “The difference seems to be that a combined Kroger Albertson will be able to analyze data from approx. 85 million households post-merger rather than 60 million households pre-merger. No evidence is presented to suggest that the ability to analyze data on an additional 25 million households would materially improve capabilities to personalize experiences,” he stated in his written response for the record, answering a question from Sen. Thom Tillis (R-NC).

Sen. Amy Klobuchar (D-MN), chair of the Subcommittee on Competition Policy, Antitrust, and Consumer Rights, told The Markup in an emailed statement, “Most people have no idea that some stores are collecting data on what groceries they buy to sell to the highest bidder or using facial recognition technology to track them as they shop. This situation is especially concerning given how few options consumers have for grocery stores in many communities. Americans deserve protections against excessive surveillance and companies misusing their personal data. It’s time to pass federal privacy legislation to protect consumers.”

Privacy Concerns

Understanding the Value Exchange

Kroger’s loyalty programs are extremely popular with shoppers. The company says that 96 percent of all purchases at its stores are tied to a loyalty card. For shoppers, the loyalty program appears to offer a simple, worthwhile bargain: In exchange for your shopping data, the store will give you significant benefits in the form of discounted prices, coupons, fuel discounts, and a personalized shopping experience. 

“What isn’t necessarily disclosed,” said Stephanie Liu, an analyst at Forrester who researches consumer privacy, “is that it is not just tracking what you buy, it’s building audiences and segments off of that,” or in other words, bundling you with other shoppers based on a shared behavior, demographics, or inferred interests. Advertisers can then target these groups.

“Yes—you’re getting discounts, but the number of parties involved who are accessing your shopper data grows exponentially,” said Liu. Kroger’s privacy policy does not disclose how many companies Kroger shares data with.

Albertsons’ Avila told The Markup, “We have made significant efforts to be transparent about how we use customer information.” Avila added, “We understand the importance of privacy and take steps, such as the anonymizing and aggregation of potentially sensitive demographic information, to protect our customers’ privacy.”

Sensitive and Unique Data

While Kroger and Albertsons stress that they only share “de-identified” or aggregated shopper data, research has shown that the unique combinations of things we purchase, and the time and place of those purchases, can be as re-identifiable as mobile phone location data. A 2015 MIT study found that in a large set of anonymous shopping data, 90 percent of shoppers could be re-identified using as few as four purchases with a known price, purchase date, and store location. 
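
To see why so few purchases suffice, consider a toy version of the matching step. All data below is fabricated; the actual study worked across millions of real transactions:

```python
# Toy re-identification: in "anonymous" purchase data, a few known
# (price, date, store) points can single out one shopper.
records = {
    "pseudonym_A": [(3.49, "2015-01-02", "store_12"), (12.99, "2015-01-05", "store_12"),
                    (6.25, "2015-01-09", "store_07"), (2.10, "2015-01-11", "store_12")],
    "pseudonym_B": [(3.49, "2015-01-02", "store_12"), (8.00, "2015-01-06", "store_03"),
                    (6.25, "2015-01-09", "store_07"), (4.75, "2015-01-12", "store_03")],
}

# Four observations about a target, gleaned from, say, receipts or social media.
known = [(3.49, "2015-01-02", "store_12"), (12.99, "2015-01-05", "store_12"),
         (6.25, "2015-01-09", "store_07"), (2.10, "2015-01-11", "store_12")]

matches = [who for who, purchases in records.items()
           if all(k in purchases for k in known)]
print(matches)  # ['pseudonym_A']: the "anonymous" record is re-identified
```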

As supermarkets expand the variety of goods they sell to include health care and medical products, sensitive information about our lives, relationships, and finances can be gleaned by the patterns of what we buy there. 

“So I look at your shopping cart and I can tell if you’re a carnivore, if you’re vegan, if you like ethnic foods, if you only buy kosher foods,” said Supermarket Guru’s Lempert.

“You can infer if you’re a parent and what stage of parenthood you’re at depending on what size diapers you’re buying,” Forrester’s Liu said. “You can infer race, gender, and generation. And the kicker though is you can confirm a lot of that when you compare this data with third-party data.”

Shopping for sensitive items on Kroger’s website highlights another privacy concern. 

A Markup analysis of browsing and shopping for a pregnancy test on Kroger.com showed that searching for the product, viewing an individual product page, and adding the item to the cart all activated trackers that transmitted the product name and a user ID to Meta, Google, Pinterest, Snapchat, and Bing, among other companies. 
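
In practice, such a tracker typically works by having the page load an invisible resource whose URL carries the event data. A hypothetical sketch, with an invented endpoint and parameter names, of what gets transmitted:

```python
from urllib.parse import urlencode

# Hypothetical illustration of an on-page tracking "pixel"; the endpoint and
# parameter names are invented, and real ad platforms each use their own.
event = {
    "event": "AddToCart",
    "product_name": "pregnancy test",
    "user_id": "a1b2c3d4",  # pseudonymous ID tied to the shopper's browser
    "page": "https://www.kroger.com/p/example-product",
}
pixel_url = "https://tracker.example.com/collect?" + urlencode(event)
print(pixel_url)
# Loading this invisible "pixel" hands the product name and user ID to the
# third party as a side effect of rendering the page.
```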

What Can You Do About It?

Thanks to various state privacy laws, residents of California, Nevada, and Virginia can opt out of data sales but still remain a member of Kroger’s loyalty program and continue to receive discounts. You can request to opt out here. California and Virginia residents can also request a copy of their data, and request that their data be deleted.   

Some grocery chains like Trader Joe’s choose not to offer loyalty programs and say that they do not sell shopper data. Trader Joe’s website notes “… we don’t have sales, we don’t offer coupons, and there are no loyalty programs or membership cards to swipe at our stores. Trader Joe’s believes every customer should have access to the best prices on the best products every day.” In a recent episode of the grocery chain’s podcast, a company spokesperson said, “We don’t collect any data on our customers.”

EPIC’s Davisson said regulators can play a role by forcing companies like Kroger “to minimize [the] collection, retention, transfer and use of … data to what is reasonably necessary to serve the consumer.”

Lempert suggests that there may not be much you can do as an individual if you want to avoid being tracked. “I would just say that if you’re concerned about privacy and a supermarket or any retail store being able to sell your data, don’t sign up for the frequent shopper card and pay with cash.”

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

In a cashless society, banking and tech elites control everything https://www.codastory.com/authoritarian-tech/cashless-governments-control/ Wed, 08 Feb 2023 10:01:52 +0000 https://www.codastory.com/?p=40164 A world without paper money should worry us, says author Brett Scott

As central bankers and governments around the world move us inexorably towards cashlessness, there remains considerable resistance. In Italy, Prime Minister Giorgia Meloni tried, much to the displeasure of the European Commission, to enable Italian businesses to refuse card payments for transactions under 60 euros ($64). In Nigeria, people have taken to the streets to protest cash shortages as the country switches to a new currency by February 10 as a step towards encouraging more digital payments. And in Switzerland, an advocacy group recently collected enough signatures to force the authorities to hold a referendum on introducing clauses to ensure the country cannot go entirely cashless. 

Writer Brett Scott has been covering how the banks are working towards a cashless world and what’s in it for them. His 2022 book, Cloudmoney, chronicles “cash, cards, cryptocurrency — and the war for our wallets.” He’s skeptical about the idea that the world is heading, irrevocably, for a future where cash doesn’t exist, where we can pay for everything with the swipe of a smartwatch or the blink of an eye.

Brett Scott is photographed on March 2, 2020 in London, England. Photo: Manuel Vazquez/Contour by Getty Images.

Scott argues that a cashless society would sound the death knell for small businesses and wipe out any remaining privacy we have, paving the way for a fully fledged surveillance system. He’s campaigning for us to hold on to cash — old, slow, and dirty as it may seem — if we want to hold on to our freedom.

This conversation has been edited for length and clarity.

Was there a moment in your personal life where you were suddenly switched on to the implications of a cashless future?

I’ve had a high degree of tech skepticism since I was very young. I was always suspicious of being told that I had to endlessly update. I was then working in finance and I also had a background in economic anthropology. I noticed a lot of the conversation around cashless societies was deeply inaccurate. People had internalized this idea that digital money was an upgrade to cash. They say things like — “my grandmother still likes cash, but she’ll eventually have to get with the times.” But really, they’re two systems that work in parallel. 

Are you saying people shouldn’t use digital money?

I’m not saying that. I’m saying that if you didn’t have another option, the digital payments system would become very oppressive. Think of it like Uber versus bicycles. So we might like the Uber system and find it convenient, but we don’t want our entire transport under the control of Uber, right? Uber can be a positive thing — so long as you have the choice to not use it. Bikes can’t take you on long trips, they’re more localized. But they have their advantages. You can get around when there are traffic jams, you have autonomy over a bike, you control it yourself — and you can’t be tracked while riding them. 

Have you been following what Italy’s Prime Minister Giorgia Meloni has been doing recently? She’s had quite a lot of backlash for saying cash is still king. 

Well, she’s actually — bizarrely enough — the only politician that I know right now who is channeling a pretty left-wing take on money. And she’s absolutely right in the sense that all digital money is private. Cash is a public form of money issued by central banks or state entities. Whereas anything you see in a bank account is privately issued by the bank. Think of bank deposits like digital casino chips. And I’ve almost never seen a politician that actually understands that. So when Meloni says that the “cashless society is like the privatization of payments,” it’s absolutely true.

But she has had a lot of criticism. People are claiming she’s helping uphold Italy’s black market and all the criminality and tax evasion that goes with it.

If you want to create a hygienic society and destroy all forms of black market deviance, whether it’s criminal or not, you’ll end up with corporate domination. Let’s say you try to crush all forms of shitty behavior by forcing everybody to use the banking sector. Well, now you’ve created a whole bunch of new problems. You’ve created serious resilience problems in the economy. You’ve created credible new vectors for inequality. Your banking elites, your tech elite, suddenly now control everything: all access to economic interaction in your society. If you suddenly defer control of the entire system to an oligopoly of private sector players, that gives them enormous power. You have to maintain the cash system if you want to create counter-power to that.

Now all those players and a bunch of other people are going to argue that the cash system is allowing various forms of black market crime to exist. But the fact is, the cash economy has always been associated historically with the most marginal people in society. And a cashless society probably wouldn’t actually solve the problem of crime — it’s well known that the banking sector is extensively used for crime all the time. 

What does a cashless society mean for the surveillance industry?

A cashless world leaves these huge data trails. There are well-known examples of intelligence agencies spying on payment networks. Right now, the worst excesses of that type of surveillance are dampened because there is an alternative, right?

You mean there’s currently a way to fly under the radar by using cash. 

Right. The thing about the cash system is that you can’t steer people’s behavior. Once it’s out of the system, cash becomes far more localized and moves around in a much more organic way. But let’s say there’s a total implosion in the cash system, and it’s allowed to happen. Maybe the world wouldn’t necessarily immediately become some giant surveillance state — but the potential for that outcome becomes much, much greater. A cashless world has crazy potential for surveillance. And crazy potential for censorship.

What does a cashless society have to do with censorship?

It’s about the ability to control people through their behavior. People’s activities can be monitored — but they can also be blocked by simply freezing their accounts. Think about the crazy levels of trauma faced by someone who can’t get access to the banking sector in a society that won’t take cash. Think about the crazy levels of economic terror that a person would face if they got excluded from the payment system in a cashless society. Right now we have a buffer if we get locked out of the banking sector, like if our cards are lost or stolen: we always have cash as a backup.

What do you think will happen if no one starts to engage with the arguments against a cashless society?

I don’t think most people want a cashless society. If you ask people if they like digital payments, most people will say yes. But if you ask them if they want cash to be taken away from them most people will say no. People don’t like having options removed from them. But many people aren’t able to articulate this, say, in the bougie coffee shop that only accepts digital payments. Many people feel a bit weirded out by the fact that they can’t use cash — but often, they don’t have an argument. They can’t articulate it. And they have no ideological support from the political class and the business class. So they’ll just think “oh, well, I guess I’m a bit old school or something.”

So how does a cashless society take shape?

It’s kind of a feedback loop. The bank stops taking cash, meaning small businesses can’t deposit cash, which means they’re less likely to accept cash. So then access to cash goes down. ATMs start closing. And so on. In order to stop this feedback loop, you have to actually act against it — and start putting in access-to-cash laws, like what Meloni is doing. And you also have to actually build a cultural movement that says it’s totally okay to demand a non-automated form of payment. It has to go against this narrative that we all want a cashless world because it’s so convenient, because it’s cleaner, because it’s faster, and so on. Because the reality is, for all this so-called convenience, people are more burnt out than they’ve ever been before. We have less time than we’ve ever had before. We’re more confused and disorientated than we’ve ever been. And this is what happens in an accelerating capitalist system. And if you don’t sync up, you get thrown off the edge.

The post In a cashless society, banking and tech elites control everything appeared first on Coda Story.

Watching the streets of Medellín https://www.codastory.com/authoritarian-tech/medellin-surveillance/ Tue, 07 Feb 2023 13:30:21 +0000 https://www.codastory.com/?p=40005 The surveillance cameras of Colombia’s police are no match for the hundreds of “eyes” employed by street gangs

The sound of a siren broke the stillness of the second night of the new year. A city surveillance helicopter operated by the National Police hovered over the foothills of western Medellín, raking a beam of light over rooftops. Were they after someone? Was there an emergency? The media reported nothing about it. Perhaps it was just a security drill.

Special series

This is the first in a series of multimedia collaborations on the evolving system of surveillance in medium-size cities around the world, by photographers at Magnum Photos; data geographers at the Edgelands Institute, an organization that explores how the digitalization of urban security is changing the urban social contract; and journalists and essayists commissioned by Coda Story.

We’ll be bringing you stories from cities around the world. Next up: Geneva, Nairobi and Singapore.

The Hawk, as authorities call it, is a U.S.-made Bell 407 and has been flying over Medellín since May 2017. Replete with high-tech features, the helicopter is worth close to $2 million. It is meant to monitor the city from the air, to bring people a sense of security. Residents of the valley where Medellín is nestled greet the Hawk with good humor.

At night, as it passes over areas where hundreds of people congregate to enjoy themselves, the reaction to the Hawk is almost festive: people raise glasses of beer, cigarettes and joints in mock salute. They poke fun at the ineffectiveness of the machine. “From so high up, it’s hard to control what happens on the streets,” some say.

The people who actually watch what happens on the streets also make fun of the Hawk. It is these watchers, not the cops, who are effectively in charge of urban surveillance. With dozens of “eyes” placed in strategic locations, they scrutinize the crowds, particularly around the city center. Their goal is to stop their enemies and the government from harming the businesses they defend, especially the illegal ones.

How many of these watchers are there? It is hard to say. Those who study the phenomenon suspect that hundreds of people, some as young as fifteen, are hired to be the eyes and ears of the organized crime and illegal trade networks that dominate Medellín. Their commanders run lucrative operations: Many sell cocaine, synthetic substances and marijuana. Some smuggle alcohol, clothing, toys, jewelry and household appliances. Others run prostitution rings, even selling sex with underage girls and boys.

Day and night, these youthful eyes observe every person who passes through the areas under gang control. Standing on street corners and protecting their faces with wide-brimmed hats, they watch for suspicious movements and make sure that no one disturbs the order they have imposed. The tension in the air is unmistakable. A simple whistle to get the attention of a friend who has just arrived is cause for reproach.

Drug gang lookout in a back alley in a barrio in Medellín.

The police stand idly by, perfectly aware of what is going on under their noses. Many of these officers are bribed to keep mum.

Armed gangs patrol dozens of outlying city neighborhoods, as they have for decades. A culture of organized crime has imposed this order. It has disciplined citizens into tolerating a fairly coercive surveillance system. This even involves restrictions on the movement of people and vehicles when necessary. People moving in and out of these areas are regularly made to pay a small sum to ensure their safety.

“Young people on motorcycles control the area,” says an older resident. “Their justification is that they are protecting us.” They monitor people’s movements to ascertain who enters the area and whether they have family or friends. If nobody knows who you are, you can be confronted, thrown out of the neighborhood entirely or even beaten up.

Trucks and vans that supply food to small neighborhood stores are viewed with suspicion. They are seen as a point of vulnerability, a vehicle for possible infiltration by police, or by rival gangs. When trucks come into the neighborhood, the watchers approach and make drivers explain what they’re doing. They too must pay a fee to transit through their streets. Commonly known as a “vacuna” (vaccine), this type of extortion escapes police surveillance. It is rarely reported.

Cell phones have become the main working tool for these armed gangs. Gang leaders lure young men into their criminal enterprise by offering them a motorcycle and a cell phone, often high end, alongside regular pay. Many succumb easily to these offers, seeing no other way to afford such desirable commodities.

Aside from preventing other gangs from taking over their streets, the watchers also protect the areas where they sell illicit substances, their other source of income. Violence erupts when these controls are breached. One such incident last year in the Robledo area, west of the city, left at least seven people dead.

“Do you know why that happened? Because of a dispute over plazas de vicio (illegal drug dealing places),” says a cab driver who lives nearby. Police personnel and representatives of the Medellín mayor’s office echoed this account in statements to the media. Such turf wars are beyond the control of the authorities and are rarely captured by the city’s surveillance cameras. 

It raises the question: Why do the cameras in this heavily surveilled city so rarely record its many deadly gangland battles?

The surveillance cameras that the mayor’s office and the National Police have mounted in various locations around the city are also mocked, much like the Hawk when it makes its nightly rounds over the city. Medellín has more than 2,800 different video devices that are managed through closed-circuit television from a command center. 

The majority of Medellín’s security cameras are situated in the city’s bustling commercial center, where more than a million people pass through each day.

“Here they charge vacunas to all of us,” says a street vendor. “They charge them to those who sell coffee, to those who sell clothes and to those who sell food.”

Another vendor steps in to clarify: “The extortions are considerably more skilled and well-planned now. They even give us bank accounts to pay the money into, which the cameras fail to notice.” He gestures to one of these cameras, next to where he sells cell phone accessories on a wooden cart designed for moving the goods through congested city center streets.

Public space is highly coveted in downtown Medellín. Official figures from the mayor’s office say there are 26,000 street vendors, but the real number is closer to 50,000, all of whom must coexist alongside formal businesses. Street vendors and business owners alike must pay for protection, which generates a high income for criminal groups.

But these “security” fees make up only a small portion of the profits earned by gangs in this congested urban area. The monitoring of the entry of illegal goods coming in from around the world is what fuels a major portion of this illicit economy. Unauthorized imports of toys, cell phones, jewelry, apparel, sneakers, televisions, stereos, cigarettes and alcohol pour into Medellín and are distributed to vendors in the city center. “These commodities arrive in trucks between Thursday and Sunday night, are unloaded in under 15 minutes, and are housed in various facilities,” says a person familiar with the routine. Criminal organizations have put in place a robust security system to guard against their seizure or theft. 

Some say that the police surveillance cameras nearest to the loading sites are either “broken or deliberately switched off.” There is much in Medellín that the cameras do not see — the activities in the so-called “Barrio Antioquía,” for instance, on any given night. The neighborhood is one of the most surveilled in the city and located less than three miles from the city center.

Security cameras in downtown Medellín and at the old home of Pablo Escobar.

A tourist from Madrid approaches a cab on the edge of a park in El Poblado, the commercial heart of Medellín’s upper class. “Can you take me to the neighborhood?” he asks. The driver briefly assesses the situation and then, with a nod, signals to the tourist to get into the back of the car.

After a roughly ten-minute journey, the vehicle reaches Barrio Antioquía, where, for more than forty years, large volumes of drugs have been distributed and sold. Here, there are eyes everywhere. The cab turns into the neighborhood and then down a winding street with dilapidated homes on either side before coming to a corner where a group of young people are huddled together. They wear baggy jeans, Oakley t-shirts, brand-name sneakers, hats and dark sunglasses. Fanny packs are strapped around their chests. This is where the merchandise is stored. They talk casually among each other and occasionally scan the streets for anyone who might be looking for what they have to offer. One of them spots the taxi. Without hesitation, he walks up to the car. “What’s up man, what are you looking for?” Without getting out of the cab, the tourist asks for 50,000 COP (about $11) worth of cocaine. This buys him about five grams of “escama de pez” (fish scale), one of the purest forms of cocaine in the streets. The dealer looks in his fanny pack, locates the little baggies of white powder and hands them through the car window, in exchange for the money. The cab pulls away and rolls back to El Poblado.

This type of business is conducted day and night. A source tells me that 72 people, most of them quite young, are employed as campaneros (lookouts) to keep a 24-hour vigil, to make sure that nothing interrupts or affects the brisk sales. The campaneros are why tourists, and local residents, can hop into a cab, buy illegal drugs without getting out of the vehicle, and leave the neighborhood as quickly as they came.

Night views of a barrio controlled by a small drug gang, who largely make a living selling cocaine to students from the nearby university.

Barrio Antioquía emerged in the mid-twentieth century to house mainly local industrial workers. By the 1960s, it became the main market for contraband cigarettes. Later, the sale of marijuana, cocaine, and other drugs began with the backing of Pablo Escobar’s Medellín Cartel. Over the years, it consolidated as an area controlled by gangsters.

The neighborhood is not blighted or bleak. It has good access roads, strong commercial and industrial activity and is next to the Enrique Olaya Herrera airport. And it has an international reputation among drug users. The authorities can do little.

Surveillance cameras once promised a brave new world of policing neighborhoods like Barrio Antioquía. If surveillance footage is used to arrest a dealer, police boast about the effectiveness of cameras. But the gangs operating in the barrio have destroyed and replaced official surveillance networks with their own CCTV cameras. “Barrio Antioquía is the most protected place in Medellín,” says an insider in the area. “For criminals, it is the city’s crown jewel.”

But they couldn’t have taken over the neighborhood without cooperation from police. This was long suspected, and it was proven in December 2020, when 22 police officers were detained for their links with drug traffickers operating across Barrio Antioquía.

For several years, two illegal activities have been ring-fenced from scrutiny and investigation. The first was the storage of illegal substances in warehouses and houses with armored doors. The second was the crystallization of coca base into cocaine hydrochloride. In addition to the laboratories installed in rural areas near the city, for at least six years there have been laboratories operating in urban areas. An expert told me that one of these labs was located just outside of Barrio Antioquía.

Local authorities say they want to put an end to these problems that have persisted for decades. The city has invested $2.6 million in the construction of a modern police station in Barrio Antioquía. Construction of the four-story building, which sits on 43,100 square feet of land, began in January 2021 and is nearing completion. The objective is to strengthen security and counteract micro-trafficking in the neighborhood.

Police conduct a spot search for drugs and weapons in a downtown Medellín bar. Police set up checkpoints on main streets in hopes of interdicting illegal behavior. Small amounts of cocaine were found, but no arrests were made.

Police, with or without imposing new buildings, have their work cut out for them in Medellín. It is a city in which the illegal economy has been consecrated in the blood and fire of generations of criminals.

The system is vast and sophisticated. And its foot soldiers, its building blocks, are the watchers on the streets who keep tabs on everything so that the flow of business, in drugs, prostitution and contraband, is unhindered. The constant surveillance is also key to the control criminal gangs exert over the general population. In each of Medellín’s 249 neighborhoods, there are 72 people hired by gangs to keep watch, not counting those monitoring the illegal CCTV cameras.

Despite large investments in technology, the authorities have not yet managed to dismantle these criminal organizations or disrupt their influence. There are flurries of police activity from time to time, and arrests are made, but inevitably it’s not long before the gangs are back and doing business as usual.

Local authorities boast of having a modern digital security apparatus with hundreds of surveillance cameras and the Hawk up in the sky as a noisy reminder that the city is being watched. But the gangs’ eyes on the streets see more than cameras, drones and helicopters do.

These watchers are closer to reality and are more agile in how they respond. As one expert sardonically told me, surveillance networks see only what those who control the networks want to see. “When the pay is good,” he says, referring to corruption, “everyone looks the same way.”

The post Watching the streets of Medellín appeared first on Coda Story.

How surveillance tech helped protect power — and the drug trade — in Honduras https://www.codastory.com/authoritarian-tech/honduras-surveillance-drug-trade/ Tue, 31 Jan 2023 15:05:59 +0000 https://www.codastory.com/?p=39645 Our investigation of how big-name monitoring software from Israel and the US made Honduras a hotbed of spy tech

I.

Hery Flores kept calm when officers approached him as he departed a Tegucigalpa gas station in the early summer of 2021. Only one was in a blue police uniform. The others wore jeans and button-down shirts — typical attire for police investigators in Honduras’ capital city. But then they slapped on handcuffs without giving him time to react. Later he realized they never showed him an arrest warrant.

They shoved Flores into a car without license plates and drove him around on Tegucigalpa’s winding roads. The officers interrogated him for more than two hours about his political activity as a student at the National Autonomous University of Honduras and as a member of a political party called Libre. Other Hondurans who had been murdered or disappeared in recent years — Berta Cáceres, Sneider Centeno, Vicky Hernández — flashed through Flores’ mind. “Their primary objective wasn’t to arrest me,” he later told me.

He persuaded one of them to let him call his mother. “The police have detained me and I’m at the Kennedy police station,” he told her. Within the hour, people were calling for his release on social media. One was the Libre party presidential candidate Xiomara Castro. Another was her husband, Manuel “Mel” Zelaya, the former Honduran president who was deposed through a coup d’etat in June 2009 and later became head of the Libre party.

Before the day was through, Flores had been charged with “aggravated arson.” Authorities alleged he set a pharmacy on fire during a 2019 protest, which he denies. Flores spent a week and a half in pre-trial detention in a maximum-security prison until a judge decided he was not a flight risk and granted his provisional release while his trial was ongoing.

This was Flores’ first sustained encounter with police, and yet they seemed to know him well. He later learned that over two years’ time, they had compiled a 300-page file on him. He had spotted police stationed outside his home from time to time. Some photos in his file came from social media. But other images he didn’t recognize seemed to have been taken from a distance, perhaps by someone who was following him. While on the phone, he had sometimes heard noises, similar to the sound of fingers typing on a keyboard.

Hery Flores driving through Tegucigalpa, Honduras.

Flores wasn’t alone. He was one of an untold number of Hondurans caught up in a complex web of surveillance tools and tactics deployed by a state determined to protect its own power and preserve its status as Central America’s largest drug corridor. Presiding over this regime from 2014 until 2022 was Honduran President Juan Orlando Hernández. After he left office in January 2022, Hernández — or “JOH” as he was commonly known in Honduras — was extradited to the U.S. on drug trafficking and weapons charges. He is currently awaiting trial in the Southern District of New York.

Dozens of interviews with current and former law enforcement officials, technical experts, activists and lawyers, along with an extensive review of documents and court filings obtained through Honduras’ public information access law, offer a detailed (if still incomplete) picture of the digital surveillance apparatus that dominated Honduras during the Hernández administration. What follows is the story of what happens when surveillance companies sell their products to a government known to be carrying out widescale human rights abuses against its people, and of the potential for reforming this abusive digital surveillance apparatus as the country heads into year two of Xiomara Castro’s administration.

A Xiomara Castro banner hangs from the National Stadium in Tegucigalpa, Honduras, on the day of her inauguration as the country’s first female president.

The roots of these problems run deep for Honduras. The Central American nation has long faced endemic poverty and violence and today has one of the highest murder rates in the world. An estimated 120 tons of cocaine passed through the country in 2019. All this has driven hundreds of thousands of Hondurans to migrate to the U.S. in recent years.

The Honduran government also has a long history of monitoring its own citizens. A former army general told Honduran media that the country has been able to illegally intercept phone calls for at least 40 years.

But under Hernández, decades of intrusive but old-fashioned government monitoring began to look genteel. Hernández supercharged the state surveillance apparatus, first by passing laws to enable overreaching surveillance and then by bulking up police snooping powers with cutting-edge software. During his administration, Honduras acquired some of the most advanced AI and digital forensics tools from the biggest brands in the business, like Israel’s Cellebrite and the U.S.-based military contractor Palantir. There is also evidence of Honduran officials using other major surveillance tech products including Circles, i2, Galileo and Pegasus.

Why would a country of 9.5 million people need so many hardcore surveillance tools? To monitor anyone threatening to expose its wrongdoings or challenge its power. And to ensure its hold on a thriving drug trade.

The case brought against Hernández by the U.S. Justice Department accuses him of leveraging “the Government of Honduras’ law enforcement, military, and financial resources to further his drug trafficking scheme” by sharing sensitive information from law enforcement and the military with drug traffickers to facilitate shipments. A drug trafficker famously quoted Hernández saying he would “stuff the drugs right up the noses of the gringos.” For his part, Hernández denies all these allegations. He says they are the result of drug traffickers trying to get back at him for cracking down on the drug trade during his time as president. 

But whether they were working for the state or trying to hold the state to account, people who were on the ground during Hernández’s time in office tell a different story. Opposition politicians, journalists and activists were under constant watch, both in real life and online. Phones were regularly confiscated and illegally compromised. Hernández built up a system that allowed him to access personal information on anyone in the country, on a whim. A former police officer, who wished to remain anonymous because he feared reprisals from the state, explained that Hernández positioned military officers within law enforcement departments as “consultants.” They would report back to him any information of interest, whether it was related to drug trafficking, political targets or other sensitive matters. But in the eyes of this officer, one thing was clear: “Nothing moved in Honduras without JOH finding out.”

Former President Juan Orlando Hernandez on a visit to Germany in 2015.
Photo: Popow/ullstein bild via Getty Images

Revelations about the reach of the Honduran digital surveillance apparatus also raise serious questions about the companies that build and sell these technologies. Tamir Israel, a researcher at Human Rights Watch, explained that these companies are required under international human rights laws to act responsibly and conduct due diligence in their sales. But many experts I spoke with agreed that from Azerbaijan to Mexico to Saudi Arabia, enough abuse has been exposed over the years to know this is not happening across the industry. Angela Alarcón, a Latin America campaigner at the digital rights organization Access Now, said that companies often try to shirk responsibility for human rights abuses by claiming their clients — mainly governments — hold sole liability for what happens to their targets. But companies are also responsible, she said: “Why? Because their technology can be used as a weapon that can destroy lives.”

II. HOW TO BUILD A SURVEILLANCE STATE

In 2009, a coup by the right-wing Honduran military, tacitly supported by the U.S., removed the president — Manuel “Mel” Zelaya, now head of the Libre party — from office and plunged the country’s already weak institutions into chaos. Thousands took to the streets in protest. Police and the military responded by violently repressing demonstrations, arresting protesters and enforcing a curfew. A subsequent commission found security forces were responsible for at least 20 deaths. In December of that same year, the head of the country’s drug trafficking unit in the public prosecutor’s office, Julián Arístides González, was murdered. Documents later showed that the police planned his killing at their headquarters.

The already-growing drug trade only further prospered in Honduras over the next decade, lending it the unenviable moniker of a “narco state.” Some police became directly involved in the drug trade. Others, under extreme pressure to protect drug traffickers from the law, were corrupt by default.

“The police have been at the service of drug traffickers and organized crime,” said Maria Luisa Borjas, a former police officer. “Honest officers who have wanted to remain part of the police have had to become deaf, blind and mute because their lives and the lives of their families are in danger.”

Borjas knows these dynamics well. She was once the head of internal affairs in the national police, where she was charged with holding officers accountable to the law. After she investigated extrajudicial killings carried out by a group of high-ranking police officers in 2002, she was fired from her job. One of the high-ranking officers she identified, Juan Carlos “El Tigre” Bonilla, later became the police director. Events surrounding Borjas’ firing and Bonilla’s tenure created further scandal that is now being laid out before a federal court in the U.S. where Bonilla, just like his former boss, is being tried on drug trafficking and weapons charges.

In 2012, when Juan Orlando Hernández was the president of the Honduran Congress, legislators passed a law colloquially known as the “Wiretapping Law,” which allows authorities to listen to phone calls and intervene in other forms of communications. 

“It’s something they were already doing, but in 2012, they legalized the practice,” said Hedme Sierra Castro, who has tracked digital surveillance of activists as part of the NGO ACI-Participa.

In 2013, a new attorney general — with close ties to Hernández — was elected to lead the public prosecutor’s office. He still presides over the institution and has been implicated in a U.S. court for aiding drug traffickers.

Hernández came to power in 2014 and used a special security council to build out a sweeping surveillance regime. Yet at the same time, crime soared and the drug trade continued to thrive. It was all for the purpose of “protecting the drug trafficking business,” said Honduran journalist and criminologist Wendy Funez. “They created an apparatus using the power of the state.”

The inner workings of the security forces in the Hernández government were complex, with more than a dozen different special forces between the police and the military. The military was the “brains” behind his security apparatus, explained Funez. “But the police carried out the orders,” she said.

The situation attracted some international attention in March 2016, after Goldman Prize-winning environmentalist Berta Cáceres was murdered in her home. Her death underscored how dangerous post-coup Honduras had become for anyone who opposed the status quo, no matter how much prominence they had reached internationally. The killing shook the country. A U.S.-trained former military officer, who had also been the president of the hydroelectric energy company Desarrollos Energéticos, was later convicted of ordering her murder.

A month later police officials ostensibly admitted they had a corruption problem and ordered a clean-up of the 12,000-person force in a process known as the police “depuración,” or purification. But “it was a farce,” Borjas said. Hundreds of officers who were kicked out of the police force have since sued the government, alleging that they were expelled for speaking out against sexual assault or opposing Hernández’s government. Or because, like Borjas, they themselves had exposed instances of corruption.

Although Honduras maintained the quiet support of the U.S. throughout the Hernández administration — the country is home to a U.S. military base, along with Chiquita (formerly United Fruit) and the palm oil giant Dinant — JOH’s immediate extradition at the end of his last term signaled a change of tune. While he was still in office, his brother, Juan Antonio “Tony” Hernández, was famously tried and convicted in the U.S. in 2021 of running a major cocaine trafficking operation. In judicial proceedings, it came out that he had accepted a roughly $1 million donation to his brother’s presidential campaign from the notorious leader of Mexico’s Sinaloa cartel, Joaquín “El Chapo” Guzmán. Both Tony Hernández and Guzmán are now serving life sentences in the U.S.

III. THE PALANTIR UNIT

Within the byzantine internal structure of the Honduran National Police, the Police Directorate of Investigations is charged with investigating serious crimes, such as homicides. In the working class Kennedy neighborhood of Tegucigalpa, a multi-building complex houses the Directorate, with more than a dozen forensic and ballistic labs, investigators’ offices and a floor dedicated to Interpol offices. It is also home to a room with about 10 computers in it, known as the Palantir Unit. 

One Saturday afternoon, when I visited the Palantir Unit, I met a lone staffer. Her job was to respond to requests by investigators for information from the Palantir platform, she explained. The U.S.-made software creates information-rich reports that profile people of interest, and it can diagram their social and family connections, drawing on information from the national census, public healthcare systems, customs and immigration, the names of relatives of people under investigation, vehicle registration information and other sources. Public records show that the Honduran police have run more than 12,700 searches through the platform since 2015, when the Palantir Unit started. Honduras’ secretary of defense and the public prosecutor’s office, through a special criminal investigation unit known as the ATIC, can also carry out Palantir requests. The secretary of defense has run 380 searches since 2014 and ATIC has run 14,157 since 2015. 

“Palantir is a database that opens a big window,” said Cristian Nolasco, the Police Directorate of Investigations spokesperson, who has been part of the police since 1998. 

Known mainly for its large contracts with U.S. law enforcement agencies and the U.S. military, Palantir in recent years has expanded its operations, selling to governments around the world and to the private sector. Now headquartered in Colorado, Palantir was co-founded in 2003 by tech billionaire Peter Thiel, and it received early seed investment of $2 million from In-Q-Tel, the U.S. Central Intelligence Agency’s venture capital arm. The publicly-traded company had a market cap of $13.3 billion at the start of 2023. 

Palantir was recently profiled in the Washington Post for the technological edge its algorithm gives to Ukraine’s armed forces. It also has contracts with U.S. Immigration and Customs Enforcement, which Amnesty International says have “[enabled] ICE to identify, share information about, investigate, and track migrants and asylum-seekers” who have been arrested in workplace raids.

Luís, who was in the police force during Hernández’s first term in office, said that he often filed requests for Palantir reports while investigating local organized crime groups. The software analyzed information to make it easier for intelligence officers like Luís — who says he received training from Americans and Colombians — to track down their targets and understand their movements. Luís asked that we not use his real name, as he is legally prohibited from speaking publicly about his work.

Luís would hand all intelligence reports to his supervisor, and he knew that reports eventually made their way to President Hernández himself. Based on his knowledge of the software’s capacity, Luís said that above all else, it helped Hernández “to control the opposition” and “to neutralize his opponents within organized crime.” Cristian Nolasco, the police spokesperson, said that even though the Palantir Unit is housed within the national police, it was the military that really had control of the technology under Hernández, through the National Directorate of Investigation and Intelligence, an opaque institution comparable to the U.S. Secret Service.

Palantir’s software is just one among many powerful surveillance tools that the Police Directorate of Investigations has at its disposal. Since at least 2012, the police have used data analytics software from IBM’s i2, now owned by the Canadian software company Harris, to profile suspects and their close connections.

A report from the University of Toronto’s Citizen Lab found that Honduras was also likely using Circles, a surveillance technology made in Israel that listens in on calls and monitors texts and location by “exploiting weaknesses in the global mobile phone system.” Researchers made this assessment following the discovery of two servers in Honduras using the technology, including one belonging to the country’s defense department. Citizen Lab also identified an operator in Honduras for the NSO Group’s notorious spyware Pegasus, which has been used to spy on journalists and activists in Mexico, El Salvador and Saudi Arabia. Previous reporting also revealed that Hernández’s government purchased software called Galileo, to hack emails and listen to phone calls, from the now-defunct Italian company Hacking Team.

Since 2017, the Directorate has also used phone extraction and analysis software from the Israeli company Cellebrite, which boasts having contracts with police, military and secret service agencies in more than 150 countries. Its main product, the Universal Forensic Extraction Device, can bypass passwords and encryption to access mobile phones and computers. It then extracts, analyzes and presents the data in a tidy report.

The Directorate used Cellebrite for 939 extractions from 2017 to 2022. The police provided me with a long list of crimes they have investigated using Cellebrite technology, including homicide, drug trafficking, human trafficking, rape, fraud, vandalism and arson. Court records prove that the police have used Cellebrite in at least one case against a group of environmental activists, in which they were able to extract incoming and outgoing calls, photo albums, social media messages, emails and plenty else from the activists’ phones.

The ATIC also uses the Cellebrite software and has conducted 3,893 extractions since 2015. The office of Honduras’ secretary of defense confirmed it does not use Cellebrite.

This type of state surveillance “is often much more intrusive than you would think it would be,” said Tamir Israel, of Human Rights Watch. “There is also a chance for more serious repercussions to happen,” Israel said, referring to torture, state killings and imprisonment that have occurred immediately after surveillance or even years in the future in other countries. “That’s why it’s important to have very clear checks and balances around every step of this at every point in time.”

These companies are required under international human rights laws to act responsibly and conduct due diligence in their sales, but enough abuse has been exposed over the years to know this is not happening on an industry-wide level, Israel said. U.N. experts, alongside groups like Human Rights Watch, have called for a global moratorium on the use of these technologies until more robust due process mechanisms and data protection laws are in place. Any company selling these products “must have known of the government’s connections to the illegal drug trade, and if they didn’t, they had a responsibility to learn,” said University of California Irvine law professor David Kaye. “If the companies did more than just sell a product — for instance, if they then serviced it, helped the customer use it — they owe the victims and the world an explanation for what they knew and what steps, if any, they took to mitigate or prevent harms.”

In September 2021, Cellebrite announced the formation of an ethics and integrity committee to respond to these types of critiques. The company has come under scrutiny for its contracts with countries like Saudi Arabia and Myanmar and the recent sale of its technology to the Ugandan police.

In an email to Coda, Cellebrite Director of Public Relations Victor Cooper wrote that the company pursues “only those customers who we believe will act lawfully and not in a manner incompatible with privacy rights or human rights” and noted that Cellebrite does not sell to countries sanctioned by the U.S., EU, U.K. or Israeli governments, such as Belarus, China, Russia and Venezuela. Cooper declined to comment on Honduras specifically. Coda reached out to Palantir about its contracting practices, but the company did not respond to a request for comment.

Daniel Osorio, a tech consultant for the police, told me he had seen the police use Palantir, Cellebrite and other technologies, but that many officers didn’t seem to have much training. “I’ve thought, who taught them the ethics of this?” he told me. “Because ethics come first.”

And all this technology is expensive. It has likely cost Honduras millions of dollars — documents obtained through open records requests indicate that the police have spent at least $136,000 per year on Cellebrite. Outside sources indicate that authorities spent $150,000 per server per year for Palantir, at least $335,000 for Galileo, and likely upwards of $1 million for Pegasus. Hernández closed his administration with more than 70 percent of Hondurans living in poverty.

This multi-million dollar digital surveillance structure was all in place by the time Juan Orlando Hernández was set to face the biggest challenge to his presidency: the 2017 reelection campaign.

IV. THE TARGETS

Hery Flores was a student when the presidential campaign kicked into full gear in 2017. That’s when Flores met Marcio Silva, who was part of the prominent protest group Movimiento Estudiantil Universitario. As part of the group, Flores and Silva both regularly spoke out about university policies like budget cuts. They also demonstrated against Hernández’s reelection campaign.

Months before the November 2017 elections, during a protest, police entered the university and beat Silva and others with batons and launched tear gas canisters. Silva and 19 others were arrested for allegedly damaging a university building. 

That’s when the surveillance started, Silva told me recently in Tegucigalpa. “I think they had already been following us and monitoring our social media,” he said. “But because 2017 was an election year, we became a problem for those who wanted to stay in power.”

Marcio Silva at the central courtyard of the UNAH campus where students would gather to organize for protests.

When Silva was in police custody, his cell phone was taken away, as is customary for anyone arrested in Honduras. But authorities cannot legally access phones or extract their data without a warrant signed by a judge. Court documents show no sign of a warrant being sought in Silva’s case. Nevertheless, “they hacked everything, all of my passwords,” Silva said.

I asked Osorio, the tech consultant, about this later on. “When they arrest people and take their phones for a long time, it’s because they are doing an excavation,” he said, referring to the process of extracting data. Cellebrite can do this kind of extraction, but there’s no smoking gun proving that this is how they got Silva’s data.

There was strong evidence that police had hacked into his social media accounts. Pornographic images and videos were posted to Silva’s Facebook account, in what he believes was an attempt to make him look “perverted” and discredit his activism. “It was part of a smear campaign,” Silva said.

Media outlets aligned with the government started publishing charts and diagrams on student leaders and the internal structure of the protest group. “There was information that astonished us with the level of accuracy,” said Silva. “The national ID numbers, the majors we studied, our school year, even our grades.”

Police sources explained that tools made by Palantir make it easy to build profiles and see connections like this. Technology like i2 also carries out similar data analytics functions, which can map connections, according to police officers familiar with the software. It uses cell phone records to analyze gang and criminal structures, as well as social movements. Other sources noted the possibility that some of the information was embellished or made up altogether, to paint a picture that served police purposes.

After he was released, Silva started receiving death threats via WhatsApp and anonymous Facebook accounts. “We’re going to kill you, we’re going to burn you, piece of shit terrorist,” they wrote. Some even slung racist messages at him, commenting on his dark complexion. The women in his organization received rape threats. Silva never got his phone back. 

Then came election night. As the results began to roll in, Hernández was trailing his opponent by almost five points. Suddenly, people’s internet connections mysteriously went dark. When connectivity was restored the next day, Hernández was in the lead. Suspicions of fraud led Hondurans to fill the streets.

Flores and Silva were among them. So was Raúl Álvarez, a former police officer who was 24 at the time. Months prior, he had been let go from the police force along with 200 other officers as part of the supposed “purification” process. His superiors never gave a reason. He was simply notified he had been removed through an “order from the executive branch.”

“They fired the good ones and left the bad ones there,” he said.

Álvarez was never formally accused of corruption. Making ends meet washing his neighbors’ cars and frustrated with the ruling government, he decided to join the thousands demanding accountability in what looked like a fraudulent election.

Among the throngs of demonstrators, he recognized some former police colleagues disguised as protesters, perhaps trying to infiltrate the mass movement. He pointed them out to others. 

“I became a target for them because of all the information that I have,” Álvarez said. 

Weeks later, Álvarez was arrested and accused of vandalizing the Marriott hotel in Tegucigalpa and another business during protests. He was put in pre-trial detention in a high-security prison. 

To prove their case, authorities used Cellebrite to access Álvarez’s phone and extract his data. Court records reviewed by Coda listed all 690 of his contacts, 47 separate SMS conversations, nearly 200 internet search histories and more. Some files he said he had previously deleted were even recovered by the technology.

Prosecutors presented photos of Álvarez at the protests that were extracted from his phone as evidence in his trial. He does not know what authorities did with the rest of the information or how many people had access to it.

Álvarez was never convicted. He was released after two years in a maximum-security prison, before the amnesty law passed. In January 2023, a judge finally closed his case. After his release, he suffered an attack that left him partially blind in one eye, and his attacker said he planned to kill him. He wants to leave the country.

“Thank God I was a prisoner and not a martyr,” he said. “But my life also matters, and I think the effort that I gave to this country is already large enough.” Knowing the police have so much information on him has made Álvarez feel even less safe than before. “I keep thinking that it’s better to ask for asylum in another country,” Álvarez said.

During the Hernández years, it was not just direct political opponents that became targets of Honduras’ surveillance regime. Lawyers and activists told me they saw similar trends among civic groups in other parts of the country. Nidia Castillo, a human rights lawyer in the southern state of Choluteca, explained how the previous government also went after environmental groups, which are sometimes seen as a threat to industry, and campesino (tenant farmer) organizations. “They need to have profiles and to know who are the leaders to initiate actions of criminalization, persecution and threats,” she said.

Activists with social movements in the Bajo Aguan region told me that their phones were regularly confiscated or tapped. Once, when members of the campesino group COPA used coded language to thwart the ubiquitous surveillance of their conversations, a misinterpretation of their exchange appeared in state-aligned media. Two COPA members told Coda that they had their cell phones confiscated by police after they were questioned about the murder of a colleague. Authorities have not formally accused them of any crime, making the confiscation of their cell phones illegal. Four campesino leaders from this region were murdered in January 2023.

Christopher Castillo, a coordinator at the environmental rights group ARCAH, also became a target. His organization investigates and reports corrupt companies and individuals whose activities threaten Honduras’ natural resources. The organization has become skilled in uncovering information on corporate and government wrongdoing, making the state see them as a threat, he told me.

In Tegucigalpa, on March 29, 2021, about 15 activists from ARCAH protested outside a chicken processing plant they said was polluting the local river with its waste, turning the water brown and giving it a putrid odor. Around 10 a.m., police officers far outnumbering the activists arrived to break up the protest. At least one drone circled above. 

Seven were detained and arrested by the police on charges of “forced displacement,” punishable by up to nine years in prison. The charge is usually brought in cases of gang members who force residents to flee their homes.

Four of the activists had their phones taken away: Christopher Castillo, Jeffrey Alexander Suazo Girón, Victor Alfonso Hernández and Michael Aguilar. Coda confirmed through court records that the police used Cellebrite technology to extract extensive data from all four phones. This included WhatsApp messages, incoming and outgoing calls, photo albums, Facebook Messenger and Twitter messages, emails and memory archives.

By April 2022, the charges against the seven ARCAH activists were dropped when a judge ruled that there was not enough evidence of a crime for the case to continue. But they still haven’t gotten their phones back.

Christopher Castillo thinks he and his colleagues were targeted for a specific reason: their activism often uncovers information that threatens the interests of powerful companies and politicians. “The message is clear. What bothers us is your information,” he said.

All of these activists reported receiving death threats and said they feared that the police and security forces could harm them. “If you ask me who has this information and this technology, they aren’t the best people,” said Osorio, the tech consultant. “They can cause you harm.”

V. MEET THE NEW BOSS: XIOMARA CASTRO

Six months after Flores was detained by the police outside the gas station, he was tapped to join the administration of Xiomara Castro, who had prevailed in national elections. Castro’s December 2021 victory brought the Libre party into power for the first time.

Castro rose to political prominence after leading a protest movement in the wake of the coup. In her third presidential run, she promised to demilitarize public security and promote a community policing model. Against the odds, Hondurans decided to give her a chance.

Activists and student leaders make up a core of President Castro’s political base. In February 2022, the Castro government passed an amnesty law under which the judiciary threw out the cases of people persecuted for their political beliefs or for protesting the previous government. Flores had his case thrown out under the law. Silva is applying for his case to be overturned. Court documents we’ve received thus far do not explicitly mention the use of digital surveillance technologies against either Flores or Silva.

Hery Flores at his office in Tegucigalpa, where he has a government job as the Vice Minister of Youth.

Castro says she wants to strengthen the rule of law and turn the police — one of the primary institutions wielding these surveillance technologies — into an effective and honest institution. But after a little more than a year in office, her long list of campaign promises hangs heavily over the administration. And support among her base has begun to falter. “It’s true there has been a change of government,” said environmental activist German Chirinos. Nevertheless, he said, “The authorities are the same — the police officers, the people in the judicial system.” 

The security minister, police director and other high level police officials repeatedly declined my requests to interview them about surveillance and initiatives to fight corruption within the ranks.

Along with former president Hernández, Honduran police arrested 11 other people on extradition requests from the U.S. after he left office. The Castro government touted this as a major success. But apart from the “big fish” strategy, it’s unclear how the new government will eradicate corruption and the influence of the drug trade on politics and security institutions. Castro’s own government and family have come under scrutiny for potential drug trafficking ties. Zelaya, Castro’s husband, was accused of taking a bribe from drug traffickers in a New York court case. A promise to bring an international anti-corruption commission has stalled. And cocaine seizures actually declined during Castro’s first year in government.

Key laws that enable abusive surveillance, including the wiretapping law, remain. “Until they are overturned, we’re not going to accomplish anything,” said ARCAH activist Christopher Castillo. In November 2022, he was stopped by the police again outside his house, without cause. 

In early December 2022, Castro’s government declared a state of exception, granting broad powers to security forces, in certain gang-controlled neighborhoods in the country’s two largest cities in an attempt to fight extortion. At least four environmental leaders have been killed so far this year. UN human rights experts have called for an independent investigation into the killing of two activists in the north, who had previously been arbitrarily detained after opposing a mining project sanctioned by the Hernández government.

Maria Luisa Borjas, the former police officer, has been following the new government closely, particularly its security policy. “There are no clear policies to combat corruption, neither in the government in general, nor in matters of security, nor in the Armed Forces,” Borjas said. She criticized the amnesty law — which has become one of the Castro government’s most controversial policies — for providing impunity to corrupt officials instead of fulfilling its intended purpose of liberating political prisoners.

VI. ‘YOU DON’T WANT ME TO KNOW WHERE YOU LIVE’

When the police gave me an extensive tour of their investigative unit, agents offered sweeping details about their work, on everything from collecting evidence at a crime scene to deactivating explosives. Many emphasized the need to follow protocol and obtain warrants for searches.

It all made me wonder if the institution is finally achieving real reforms.

But at the end of my tour, my confidence evaporated. A police officer offered to give me a ride home. I politely declined, saying I didn’t want to inconvenience him. “You don’t want me to know where you live,” he said. “Even though I could find out anyway.”

He explained how based on my phone number he could find records from the phone company that document my address or even use GPS data from my phone to figure out where I spend most of my time.

“You couldn’t do that, because you would need a warrant,” I said, repeating what the police had gone through so much trouble to emphasize to me when they explained all the technology in their power.

He responded that he could just send the woman who operates the computer to go buy a soda, and he could log into the computer without her knowing.

“But that would be corrupt,” I said. “And you’re not corrupt, right?”

Of course not, he said. He had just been joking.


This reporting is sponsored by the Bruno Foundation, set up by journalist and writer Martin Walker. Walker is a celebrated international reporter, historian, and author of the popular Bruno detective series. The series’ eponymous protagonist has a distinct sense of justice, intrigue, and tenacity – traits the Bruno Fellowship encourages.

The post How surveillance tech helped protect power — and the drug trade — in Honduras appeared first on Coda Story.

For Italy’s right wing, cash is still king https://www.codastory.com/authoritarian-tech/meloni-italy-cashless-future/ Fri, 13 Jan 2023 13:49:41 +0000 https://www.codastory.com/?p=39187 Prime Minister Giorgia Meloni wants Italians to keep using cash. As the EU moves toward a cashless future, she’s become an unlikely ally for small businesses and privacy advocates

Produce your Apple Pay or debit card to pay for an espresso in Rome, and you’re often met with a pained expression. “Solo contanti,” they’ll plead — cash only. 

Unlike, say, Scandinavia, the U.K. and the Netherlands, where many citizens have stopped carrying cash altogether, in Italy having a few euros in your pocket is a part of daily life. Italians, alongside Germans and Austrians, are among the most “cash prone” in Europe. Cash is how you pay for your morning cup of coffee, for fruit and vegetables at the grocer, for taxis, snacks and gelato. A survey in 2019 showed that 86% of transactions at the point of sale were in cash.

In 2020, the government introduced a “Christmas cashback” scheme to encourage card payments by offering people rewards and rebates: for every card payment of up to 150 euros, the government would refund 10%. But right before the holidays this year, the country’s new hard-right prime minister, Giorgia Meloni, announced a budget that seemed to take Italy backwards, just as the rest of Europe, and indeed Italians, were embracing card payments — particularly contactless — in ever greater numbers.
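
To make the arithmetic concrete, here is a minimal sketch in Python of how such a rebate might be computed. It assumes one plausible reading of the scheme as described above — a 10% refund on each card payment, counted only up to 150 euros per payment. The function name and the per-payment cap interpretation are illustrative assumptions, not details from the official program.

    # Hypothetical sketch of the 2020 "Christmas cashback" rebate as described
    # above: 10% back on each card payment, counted up to a 150-euro cap per
    # payment (an assumed reading; the official rules may have differed).
    def christmas_cashback(payments_eur):
        REBATE_RATE = 0.10
        PER_PAYMENT_CAP_EUR = 150.0
        return sum(min(p, PER_PAYMENT_CAP_EUR) * REBATE_RATE for p in payments_eur)

    # Three card payments of 40, 90 and 200 euros: the last one only counts
    # up to the cap, so the refund is 4 + 9 + 15 = 28 euros.
    print(christmas_cashback([40, 90, 200]))  # 28.0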

“Cash must be king,” Meloni told Italians. She proposed that in 2023 business owners would be allowed to refuse digital payments for transactions below 60 euros without a fine. On top of that, she would raise the current limit for cash payments from 2,000 euros to 5,000 euros. 

The European Commission in Brussels pushed back immediately. Italy has one of the largest black markets and shadow economies in Europe, and its former government had pledged to Brussels that it would reboot the country’s flagging economy while fighting tax evasion under EU guidelines. On that condition, the Commission gave Italy 220 billion euros (about $238 billion) in coronavirus recovery funds — by far the largest share in Europe.

But Brussels said Meloni’s plans went against Rome’s pledges to fight tax evasion, and Meloni was forced to scrap her plan to allow businesses to refuse card payments for bills below 60 euros. 

Meloni still plans to push through her pledge to raise the overall legal limit for cash transactions to up to 5,000 euros. This is well below the Council of the EU’s proposed bloc-wide limit of 10,000 euros but above Italy’s previous pledges to reduce the limit to 1,000 euros by the start of the year. 

“The world is definitely moving towards cashless society. Definitely,” Spiros Margaris, a Swiss fintech advisor and “futurist” venture capital influencer, told me on Zoom. But, he added, “the cashless society is both a curse and a blessing.”

A cashless society also spells trouble for shadow economies: it makes it more difficult for people to evade taxes and makes life harder for criminals, traffickers and those in the drug trade. Denmark just recorded its first year in history without a single bank robbery, and it has an increasingly cashless society to thank.

But there are significant drawbacks. A natural disaster or war can quickly cut off access to electricity and, with it, our ability to pay for things. During Hurricane Sandy, New Yorkers walked miles uptown to withdraw cash after electronic payments became impossible in many parts of the deluged city. In the aftermath of Hurricane Maria in Puerto Rico in 2017, the cash economy reigned as the power grid went out for months on end.

And while a cashless economy means more cash transactions are forced to take place above the table, anti-surveillance advocates say a cashless future would allow governments and banks to wield more power than ever.

“The problem with a cashless society is that it is a surveillance society. And not only can governments, banks and tech companies monitor what you have earned and spent in a cashless world, they can preemptively control it too,” wrote Silkie Carlo, director of the U.K. privacy group Big Brother Watch, in an op-ed in June. The writer Brett Scott, whose book, “Cloudmoney,” rails against the advent of a cashless society, says a cashless world is “a world where even the tiniest of payments will have to travel via powerful financial institutions, which leaves us exposed to their surveillance and control — and also their incompetence.” 

With her November declaration that “cash must be king,” Meloni became a kind of antihero for the movement pushing back against a cashless future. Meloni said she was protecting poor people and small businesses, standing up against a world in which the elderly and homeless are locked out of the digital economy. But her critics argued that she was really protecting Italy’s enormous dark money industry. 

“Meloni loves cash that is essentially untraceable,” said Mirella Castigli, an Italian author who has written several books on digital privacy. “But this seems not to be a right to privacy issue, but another way to give a wink to tax evaders. It’s anachronistic to say people should go back to where we were years ago.”

Italy’s black market is one of the largest in Europe, worth a sizable chunk of the country’s gross domestic product. And Meloni’s proposed upper limit on cash payments, says sociologist Marco Omizzolo, would “make things worse for migrant workers, it just allows for greater exploitation.” The higher cash limit, he explained, would mean traffickers could keep their transactions under the table and pay people well below the minimum wage with impunity.

Unlike Meloni, many leaders around the world have vowed to crack down on their countries’ dark economies by imposing cash limits — or withdrawing cash altogether from circulation. In 2016, Indian Prime Minister Narendra Modi announced that higher-value 500 and 1,000 rupee notes — 86% of the money supply — would be removed from circulation. The move was meant to tackle corruption, with one minister describing it as a “surgical strike” against black money. Modi, meanwhile, told the millions of people who rely on cash in India that demonetization would be “the chance for you to enter the digital world.”

The digital rights group Privacy International said at the time that the move had another aim, “linking financial transactions to identity.” Six years later, “cashless India” remains a flagship Indian government program, which the International Monetary Fund has praised. The opposition, though, has pointed out that the move failed to eliminate black money and led to job losses and much economic hardship for the rural poor in particular. In a recent survey, over 75% of respondents said they still used cash to buy groceries, eat out and pay for home repairs, deliveries and other services.

While the jury is out on the success of Modi’s demonetization, it remains true that card payments are rising steeply as a proportion of transactions. Covid restrictions have also helped turbocharge the world’s progression towards a cashless future, as contactless payments were encouraged as a way of stopping virus transmission — a claim that had little in the way of medical or scientific backing.  

“People were much more open to digital solutions and digital transformations,”  said Margaris, the Swiss fintech advisor. “The adaptation [to a cashless society] was accelerated, and now people just have to digest it.” In Italy, too, Meloni will likely bow to the inevitable. But in the meantime, she has won brownie points with small business owners and those who see a cashless society as evidence of the concentration of power in the hands of government, banks and big tech.  

‘Undercurrents: Tech, Tyrants and Us,’ a new podcast series
https://www.codastory.com/authoritarian-tech/undercurrents-podcast-s1/ | Thu, 05 Jan 2023
In partnership with Audible, Coda presents eight stories from around the world of people caught up in the struggle between tech, democracy and dictatorship.

Smartphones, social media, and surveillance tech are sold to us as ways to build a safer, more connected and convenient world. Many of us were hopeful this tech would also lead to a more open, more free society.

But with authoritarianism seemingly on the rise across the world, did we get it wrong? Maybe tech is just making life easier for the tyrants.

In eight episodes, reported from around the world, journalist Natalia Antelava and the team from Coda Story focus on the stories of people caught up in the struggle between tech, democracy and dictatorship, and ask whether tech is doing more for dictators than it is for democracy.

Listen to Coda’s collaboration with Audible on the original series Undercurrents: Tech, Tyrants and Us.

Episode 1: Escaping China’s Surveillance Net

A young man is detained by police in China’s Xinjiang region, and stumbles on an unlikely means of escape. He double-crosses his interrogators and runs as far as he can – all the way to the frozen Arctic. But will he ever really be free from China’s surveillance web? Even in his new home in northern Norway, the eyes of the Chinese state are never far away.

Episode 2: Tech and the Taliban

Smartphones and mainstream apps power a volunteer team racing against the clock to help Afghans leave the country before the Taliban takes over. For those left behind, those who opposed the Taliban, how do they stay safe? Because this Taliban regime is very different from the one that ruled 20 years ago. This one uses social media and biometric databases. 

Episode 3: Russia’s Leaky Databases

In 2021, Russia’s main opposition is using digital technology to challenge Putin’s government as never before. It’s building databases of supporters to help further the cause, just like political organizations the world over. But then, those supporters start getting visits from the police.

Episode 4: Thailand: The Genie Escapes

In Thailand there’s one subject that’s been taboo for decades: the Thai Royal Family. So how do the authorities react when the topic appears on social media, and the genie is out of the bottle? And what does that have to do with the disappearance of a social media activist in neighboring Cambodia?

Episode 5: Silencing India’s Critics

Police say they’ve uncovered a plot that implicates some of India’s most distinguished lawyers, intellectuals and activists in a conspiracy to bring down the government. Central to the case are letters they’ve found on a laptop belonging to one of the suspects. But how – and why – did the letters get onto that laptop? When US-based forensic investigators take a look they’re shocked at what they discover.  

Episode 6: Telegram vs The Dictator 

In 2020, the President of Belarus has his back to the wall. Hundreds of thousands of protestors are on the streets in demonstrations that are amplified and coordinated via a messaging app – Telegram. The regime shuts off the internet, but even that can’t stop the ‘Telegram revolution’. So what’s the next move for Europe’s last dictator?

Episode 7: The Data Trap

When US sheriff’s deputies keep turning up at his house, a father in Florida can’t figure out why they’re so interested in his teenage son. Before long, the whole family is caught up in the local sheriff’s ‘data driven policing’ program. 

Episode 8: The Border Industrial Complex

Migrants from all over the world gather in the French port of Calais. It’s the last stop on a perilous journey. Their destination, the U.K., is on the horizon. They’ve used their smartphones to navigate here, to stay in touch with families, and negotiate with smugglers. But now they’re up against millions of dollars worth of security and surveillance infrastructure, and they have a decision to make.

Can the world’s de facto tech regulator really rein in AI?
https://www.codastory.com/authoritarian-tech/ai-act-europe/ | Tue, 03 Jan 2023
AI software is advancing much faster than the law. The European Union is working to catch up.

Artificial intelligence is creeping into every aspect of our lives. AI-powered software is triaging hospital patients to determine who gets which treatment, deciding whether an asylum seeker is lying or telling the truth in their application and even conjuring up weird conceits for sitcoms. Just lately, these kinds of tools have been helping killer robots select their targets in the war in Ukraine. AI systems have been shown, again and again, to carry systemic biases, and their increasing centrality to the way we live makes the debate over how to govern them ever more urgent.

In typical tech fashion, AI-driven tools are advancing much faster than the laws that could theoretically govern them. But the European Union, the world’s de facto tech watchdog, is working to catch up, with plans to finalize its landmark AI Act this year.

The use of AI in surveillance and monitoring technology is one of the hot-button issues bedeviling ongoing negotiations. Software used by law enforcement and border agencies is increasingly reliant on things like facial recognition and social media scraping tools that amass vast stores of people’s data and use this information to make decisions about whether or not they should be allowed to cross a border or how long they must remain incarcerated.

The EU’s draft regulation is premised on the fact that systems like these can present significant risks to people’s rights and well-being. This is especially true when they’re built by private companies that like to keep their code under lock and key.

The AI Act aims to establish a framework for assessing the relative riskiness of different kinds of AI systems, dividing them into four tiers: unacceptable risk products, such as China-style social credit scores, which would be banned outright; high-risk tools like welfare subsidy systems and surveillance tech software; limited risk systems like chatbots; and minimal risk systems such as email spam filters.
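
To make the tiering concrete, here is a minimal sketch in Python of the four-tier framework as described above. The tier names and the example systems are taken from this article; the data structure and lookup helper are illustrative assumptions, not anything drawn from the draft legal text itself.

    # The draft Act's four risk tiers, with the example systems this article
    # mentions for each. "Unacceptable" systems would be banned outright.
    RISK_TIERS = {
        "unacceptable": ["China-style social credit scores"],
        "high": ["welfare subsidy systems", "surveillance tech software"],
        "limited": ["chatbots"],
        "minimal": ["email spam filters"],
    }

    def classify(system):
        # Return the tier whose example list contains this system.
        for tier, examples in RISK_TIERS.items():
            if system in examples:
                return tier
        return "unclassified: would need a case-by-case risk assessment"

    print(classify("chatbots"))            # -> limited
    print(classify("email spam filters"))  # -> minimal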

But it has some surprising omissions. Kim Van Sparrentak, a Dutch member of the European Parliament for the Green Party, was quick to note that the European Council has tried to create carve-outs that would allow law enforcement and immigration agencies to keep using a wide range of these tools, despite their proven risks. In early December, more than 160 civil society organizations issued a statement expressing concern that the law doesn’t account for AI use at the border and unfairly impacts those already on the margins of society, such as refugees and asylum seekers.

“The risk is that we create a world where we continue to believe in the Kool-Aid of AI, and won’t have the right system in place to make sure AI doesn’t inflict [harm] on our fundamental rights,” said Van Sparrentak. 

The AI Act may also run into enforcement challenges. The regulation will apply mainly to companies or other entities developing and designing AI systems — not to public authorities or other institutions that use them. For example, a facial recognition system could have vastly different implications depending on whether it’s used in a consumer context (i.e., to recognize your face on Instagram) or at a border crossing to scan people’s faces as they enter a country. 

“We are arguing that a lot of the potential risks or adverse impacts of AI systems depend on the context of use,” said Karolina Iwanska, a digital civic space advisor at the European Center for Not-for-Profit Law in the Hague. “That level of risk seems different in both of these circumstances, but the AI Act primarily targets the developers of AI systems and doesn’t pay enough attention to how the systems are actually going to be used,” she told me.

Although there has been plenty of discussion of how the draft regulation will — or will not — protect people’s rights, this is only part of the picture. According to Michael Veale, a University College London professor who specializes in digital rights, “the AI Act has to be understood for what it is: a legislative and market instrument.” The reason the European Commission is acting here, said Veale, is that member states have passed differing laws around AI at the national level, which create barriers to trade in the internal market. “The concern is they won’t be able to trade AI systems because there’ll be different rules per member state,” said Veale.

Europe’s action to develop rules around AI comes with the aim of developing a “harmonized market” around the trading of AI systems. “That’s the fundamental logic of the AI Act, above all else,” Veale told me. 

Under the current draft of the Act, high-risk tools include AI used in education, employment or law enforcement. For high-risk AI, the Act sets requirements concerning design, labeling and documentation for any new piece of technology. For everything else — systems deemed non-high-risk — the Act forbids member states from regulating them at all. “That allows non-high-risk systems to move freely across the Union and be traded,” said Veale.

But Veale thinks that goal is naive. “When we say we trade AI systems, that ignores a lot of the practical reality around how business models of AI work,” he said. Nevertheless, it’s the underpinning principle of what we’re seeing. “It’s a legislative idea,” he said. “It’s not, ‘Let’s make the best human rights in the world.’ It’s, ‘Let’s remove barriers to trade for the technology industry.’”

The regulation does not establish an independent entity that will vet or evaluate these technologies — instead, companies will be expected to report on their activities in good faith. A quick look at Silicon Valley gives many people reason to believe this won’t cut it. Under the current draft, “you don’t even have to get a third-party private body to tick off your documentation,” said Veale. “You can just self-certify to a standard for the legislation, and pinky promise you did it correctly.”

Karolina Iwanska was equally worried about the certification requirements — particularly when it comes to tools in the high-risk category. The regulation will require providers to develop a risk management system and ensure their training data is relevant, representative and free of bias, an Achilles’ heel for such tools. There’s now a decade of research on the topic, from Latanya Sweeney’s seminal 2013 study on racism in Google’s search algorithm right up to the present day, when ChatGPT, the latest AI-powered chatbot, indulges in casual racism when opining about the value of different people’s lives based on their ethnicity. AI tends to reflect our societies like a mirror. If it’s trained on our unjust reality, or an unrepresentative data sample, it will harm some people worse than others.

So far, experts worry that the regulation will not sufficiently acknowledge how complex these technologies are, and how difficult it can be to change them once they are up and running. “There is an assumption that you can fix the system,” said Iwanska, “but that ignores obligations on authorities that are actually going to be deploying those systems. There is no consideration of systemic biases, for example.” It’s one thing to prevent biases being coded into a system or to ensure that systems are built using data that is representative of society and free of bias, but AI is always reflective of its creators — and that’s mostly affluent white men.

Iwanska also says that drafters have offered little more than lip service to the real need for transparency or accountability around these tools. At present, the AI Act will require technology providers to include the intended purpose for their system, who the developer is, their contact details and their certificate number. But, “there’s nothing on the substance of how the system operates, what sort of criteria it uses, how it’s supposed to perform and so on. That’s a big fault that we feel will undermine public scrutiny of what sort of systems are developed,” she said.

The self-certification model borrows from other areas that Europe regulates, but few are as important to society as AI governance. Veale too was concerned about the pitfalls of this approach. “The rules are for fundamental rights around things like human oversight, or bias, or accuracy,” he said. “Not only are these things going to be self-certified by companies using this to try and lower the burdens on them, but they’re also going to be made up and elaborated in a completely closed-door, anti-democratic process that’s ongoing right now — even before the law is passed.”

Of course, the law is still being hashed out — it’s impossible to know for certain how it might change the way AI is used by public agencies. “The definitive answer will come in a couple of months, because the legislative process is still ongoing,” said Iwanska. She isn’t yet sure what impact the process will have. “[We] can expect that this proposal will change a lot,” she said. “But it’s not clear yet in which direction — so whether it will improve or be undermined.”

Alex Engler, a fellow in governance studies at the Brookings Institution, believes that where Europe leads, the world will follow. Because the European Union is a 450-million-strong market of consumers, and because it has in recent years managed to bring big tech partly to heel through its regulatory moves, he feels confident that the EU’s AI Act will shift how manufacturers of such systems operate worldwide. We’re already seeing a Europe-wide backlash against AI-powered surveillance systems that Engler expects will be bolstered by marketwide regulation from the EU. In fact, the European Data Protection Supervisor has welcomed plans to ban military-grade spyware of the type used to monitor politicians and journalists, as part of a proposed Media Freedom Act. And in November 2022, Italy’s data protection agency banned the use of facial recognition systems and other intrusive biometric analysis until the end of 2023, or until laws covering its use are adopted, whichever comes sooner.

The EU’s legislation is part of a broader movement to try and draw boundaries around the development and use of AI systems. In the United States, the White House Office of Science and Technology Policy has put forward a blueprint for an AI Bill of Rights following a year-long consultation with the public and experts, as well as industry. That followed the draft Algorithmic Accountability Act, which was introduced in Congress in March 2022. And in July 2022, plans for the American Data and Privacy Protection Act moved out of the committee stage with rare bipartisan support.

However, Americans shouldn’t hold their breath for anything to change soon, particularly with a new Congress convening this year. “In the U.S., you’re much less likely to see legislation,” said Engler. “There’s no evidence that anything like the Algorithmic Accountability Act is gaining momentum, and there’s a lot of skepticism around the Data and Privacy Protection Act,” he added.

In part, that’s because of the challenge of getting your arms around the morass of complications that legislating AI throws up. This is a global problem. “I don’t think you can write down a single set of rules that will apply to all algorithms,” said Engler. “Can we regulate AI? If you’re expecting a single law to come out that solves the problem, then no.” Yet he does think that governments can do better than they currently are, by adapting themselves holistically to emerging software in general. “That’s what we have to do — and in some ways that’s more daunting and less splashy, right?” Engler said. “It’s a whole-of-government change towards a deeper understanding of technology.”

Despite both the political and technological challenges that policymakers have had to grapple with in order to find consensus on the regulation, Van Sparrentak, the Dutch MEP, thinks the effort is worth it — not least because not acting allows AI’s use to grow unchecked. “What is most important is, when AI comes in place, people will never stand empty-handed anymore vis-à-vis a computer,” she said. “They’ll have an idea of why the system made a certain decision about their lives, and they’ll get transparency over that.”

The year in authoritarian tech trends
https://www.codastory.com/authoritarian-tech/2022-authoritarian-tech-trends/ | Wed, 28 Dec 2022
A round-up of Coda’s top authoritarian tech stories that were stranger than fiction, from actual killer robots to the post-Roe abortion surveillance dragnet.

From murderous machines to the looming abortion surveillance dragnet, the technology stories we covered in 2022 were enough to give even the most seasoned science fiction writers a run for their money. Here were some of our top hits:

The rise of the killer robot

Forget fantasyland Westworld machine murderers. Real-life lethal robots are now fighting their way into warzones and police departments worldwide. 

Coda’s Ilya Gridneff explored the rollout of a new generation of autonomous machines on Ukraine’s battlefields. Naval drones and unmanned, machine gun-equipped ground vehicles are “poised to upend modern warfare,” Gridneff wrote. The emergence of these “killer robot” devices raises all sorts of terrifying questions about the ever-blurring boundary between machines and humans and the existential risk of ceding too much control from the latter to the former. 

They’ve made it to California, too. Lawmakers in San Francisco, one of several U.S. cities doubling down on police surveillance in response to concerns about crime, recently faced severe backlash after nearly approving a measure that would have let police use robots to kill. The neighboring city of Oakland also explored (and then scrapped) a plan to arm police robots with guns. 

America’s post-Roe abortion surveillance matrix

When the U.S. Supreme Court overturned Roe v. Wade, the landmark case establishing a constitutional right to abortion, privacy experts were quick to point out the dangers of the decision in the digital age. As we wrote after a draft opinion was leaked in May, people’s search histories, text messages, location data, social media activity, purchasing records and use of reproductive health phone apps could all be used as evidence in legal cases against those who seek the procedure in states where it is outlawed.

“As soon as abortion becomes criminalized, then any sort of digital trace that people leave online at any stage of their journey could be evidence that might be used against them,” Nikolas Guggenberger, now-former executive director of Yale’s Information Society Project, explained. And that’s nothing to say of the incredibly messy universe of questions it might raise for speech on social media platforms. Already, companies have been accused of suppressing content about abortion and abortion-inducing drugs.

The spy in your pocket

It’s impossible to talk about authoritarian tech trends without talking about spyware. There is a huge global appetite for this technology among governments of all stripes. We’ve covered the topic extensively in our Authoritarian Tech newsletter — subscribe if you haven’t yet! — and the updates are coming in so quickly that it’s hard to keep track. In California, WhatsApp and Apple have sued the Israeli spyware firm NSO Group, and a group of journalists from the Salvadoran investigative newsroom El Faro are also taking NSO to court for building software that infected reporters’ phones and tracked their every move.

For journalists targeted with spyware, the personal and professional harm can be severe and long-lasting. Over the summer, we covered the story of Togolese reporters who appeared on a leaked list of 50,000 phone numbers that NSO clients targeted for surveillance. A year after the revelations, the threat of being infected with spyware continues to haunt them.

Engineering a perfect society – through mass surveillance

The scope of mass surveillance in China is so widespread that it’s difficult to truly wrap your mind around it. Coda reporter Liam Scott gave us a primer when he interviewed Wall Street Journal journalists Liza Lin and Josh Chin about their recent book, “Surveillance State: Inside China’s Quest to Launch a New Era of Social Control,” which describes the country’s descent into mass surveillance as a tool of authoritarian social control. 

The magnitude of surveillance in Xinjiang, where the government has been accused of carrying out a genocide against Uyghur Muslims, is “truly totalitarian,” reporter Chin explained, with the goal of completely “remolding” the individuals it targets. This includes a system of biometric data collection, facial recognition technology, so-called “Big Brother” programs and advanced artificial intelligence that authorities have imposed on the population to exert “total control.” Outside of Xinjiang, residents have faced extreme surveillance under Beijing’s draconian “zero Covid” policy, which reporter Isobel Cockerell has explored at length in her excellent Infodemic newsletter. 

The building blocks of the surveillance nightmare unleashed in Xinjiang and beyond, however, can be found in the U.S., home to companies that happily supplied their technologies to the Chinese government as it constructed its panopticon. These tech companies, Chin explained, “midwifed the Chinese surveillance state from its most embryonic state in the early 2000s, and they continue to nurture it with capital and components.” China’s end goal with this tech, he believes, is to build a “perfectly engineered” society. If that’s not dystopian nightmare fodder, I’m not sure what is.

As we struggle to find a silver lining in all this, it may be time to take a step back and reconsider tried-and-true methods of communication. From protester signs in China to print-and-post samizdat networks in Belarus, our stories in 2022 also showed the enduring power of pen and paper. Enjoy your reading.

The year in cross-border repression campaigns
https://www.codastory.com/authoritarian-tech/2022-crossborder-repression-campaigns/ | Tue, 27 Dec 2022
Regimes are becoming bolder in targeting dissidents abroad. Here are some of the worst cases from 2022.

In 2022, more governments unleashed harassment and violence on dissidents who had found refuge — and presumably safety — in other countries. This phenomenon is known under the umbrella term “transnational repression,” with regimes deploying just about any asset at their disposal to silence critics and curtail information sources from abroad. This year marked an escalation — many countries, big and small, are copying the transnational repression tactics honed by the most brutal, unconstrained regimes. Here are some of the worst transnational repression pioneers of 2022.

China

China continued to be the most dangerous cross-border offender. As part of its highly sophisticated transnational repression campaign, the regime issued hundreds of Interpol red notices — requests to police around the world to detain and send suspects back to China. In April, the Chinese government tried to force the return from Saudi Arabia of four members of the Uyghur minority, a group heavily targeted both within and outside China. Among the four was a 13-year-old girl who, along with her mother, risks being sent to a detention center. Following an outcry from human rights groups, the deportation has been delayed.

Under the banner of an anti-corruption program called Sky Net, the Chinese state has also ramped up efforts to repatriate Chinese nationals it accuses of corruption. The program has seen thousands targeted in the last few years, including the Chinese businessman Ma Chao, a member of the persecuted Falun Gong movement currently living in Cyprus. At the start of the year, members of his family in China were arrested to increase pressure on him to return. Just one month later, an Interpol notice was issued against his wife. 

Even within the U.S., traditionally seen as the ultimate safe haven for those escaping persecution abroad, China has ramped up its efforts to target dissidents. In October, the FBI charged seven individuals with conducting a campaign to surveil and coerce U.S. residents to return to China. In response to this concerning trend, a group of Democratic congressmen has introduced a bill that seeks to codify transnational repression as a crime under U.S. law.

Turkey

Turkey is one of the biggest transnational repression actors. High-profile attempts to return Kurds to Turkey were a regular occurrence in 2022. Turkey has been able to leverage Russia’s war in Ukraine, demanding that Finland and Sweden commit to more proactively returning dissident Kurds in exchange for Turkey’s support for their NATO membership bids. Turkey’s government has provided a list of dozens of people it wants repatriated. It has also continued to tap informal networks to attack and threaten journalists living abroad. Those targeted in Sweden include the Turkish-Kurdish journalist Ahmet Donmez, who, in March of this year, was attacked outside his home.

Iran

Over the years, the Iranian regime has used tactics such as assassinations, renditions and digital intimidation to target Iranian citizens in countries in Europe, the Middle East and North America, according to Freedom House. During the past three months of cascading protests across Iran, there has been renewed global interest in the dangers facing Iranian activists living at home and abroad.

In October, masked men attacked anti-government protestors outside the Iranian embassy in Berlin, leaving several injured. The British police recently warned two British-Iranian journalists and their families that they faced an increased “credible” threat from Iranian state security forces. The head of the U.K.’s domestic spy network, MI5, used his annual threat update to warn of Iran’s ambitions to “kidnap or even kill British or U.K.-based individuals perceived as enemies of the regime.” He said that there had been at least 10 such potential threats since January 2022.

Saudi Arabia

Since U.S.-based Saudi journalist Jamal Khashoggi was murdered in 2018 inside the Saudi embassy in Turkey, Saudi Crown Prince Mohammed bin Salman has been under a measure of diplomatic pressure. That has not stopped him from expanding the Saudi government’s transnational repression efforts. In August, the same month that President Biden met with the prince, three people were sentenced in Saudi Arabia after being surveilled while abroad. One was a 34-year-old mother who had tweeted about the Kingdom while in the U.K. 

It was also in August that a former employee of Twitter was convicted in the U.S. for using his access to Twitter’s data to spy for the Saudi regime. Last week, a U.S. judge dismissed a lawsuit against bin Salman that sought to hold him accountable for Khashoggi’s murder. The judge said that, while he felt uneasy about it, his hands were tied because the Biden administration had made a recommendation to give the Saudi leader political immunity. Saudi Arabia has cemented its position as one of the worst transnational aggressors of 2022, and the Biden administration’s policy is likely to give the regime wiggle room in 2023.

In South Korea, women are fighting to end digital sex crimes
https://www.codastory.com/authoritarian-tech/molka-digital-sex-crimes-south-korea/ | Tue, 20 Dec 2022
Amid South Korea’s culture of surveillance, students, lawyers and bathroom inspectors are working to eradicate spy cameras.

Seo-yeon Park was lying beside her partner in a motel room near Sinchon, a lively neighborhood in the South Korean capital Seoul, when she was stirred awake by something moving near the foot of her bed.

A young man was standing over her, his face hidden behind a smartphone. He moved the phone from one hand to the other, readying a new angle as Seo-yeon’s partner slept at her side. Seo-yeon leapt up, and the intruder ran off. She chased him out of the motel into the streets, but he was too fast, disappearing down a side street.

She figured he had picked the lock or gotten in some other way. “I was very angry because my wallet was there and my money was there, too,” Seo-yeon told me. But he didn’t want her money. All he took was her photo.

She rushed to the motel owner, urging him to call the police and asking if she could look at closed-circuit surveillance camera footage from the motel manager’s office. But the owner offered little help, telling her there was no such footage. She later learned that he’d lied to her and shared the video from the incident with the police. But the response was telling.

At only 17, Seo-yeon had reason to believe that she was the target of a digital sex crime and that the man would publish the photo of her, asleep, on one of the many thousands of sites that publish illegal photographs and videos of women. Few institutions were available to assist Seo-yeon. No cameras, no government officials and no law enforcement agency offered much help, even though incidents and attacks like this were becoming more commonplace. Three months later, the intruder was arrested and sentenced, but because he was a teenager, he was released on probation without time served.

Seo-yeon Park at home in Seoul. Photo by Jeong-mee Yoon.

For many young people in Korea, this story will sound familiar. Despite years of public outrage and legislative efforts to curb digital sex crimes, the country remains home to a profitable industry that exploits non-consensual images of women, many of them underage, and even coerces them into sexual acts that are filmed and distributed online. This type of covert filming even has its own name: “molka” in Korean, meaning mole camera, referring to both the camera and the footage.

몰카

[moːɾkʰaː]

In 2018, a man arrested for installing spy cameras in motel rooms was found to possess 20,000 illegally captured videos. The country’s now-former president, Moon Jae-in, soon thereafter acknowledged that illegal spy cameras had become “part of daily life.” That same year, thousands took to the streets, demanding legislative action on molka crimes as part of the global #MeToo movement. But today, stories about camera installations for illegal filming still make headlines weekly.

Some of Seo-yeon’s friends soon became targets of digital sex crimes too, their intimate images leaked online by strangers with pinhole cameras lurking in bathrooms or subway stations or motel rooms. Most often those images and videos were taken by strangers. Other times they were distributed across social media by embittered former partners. Seo-yeon herself never found out what happened to the photos taken of her. She did not want to know.

Instead, she wanted to find a way to stop these crimes from happening. Seo-yeon formed a group called Digital Sex Crimes Out, an organization that, from 2017 through early 2022, sought harsher laws against illegal filming and digital sex crimes in South Korea. She went by the nom de guerre Ha Yena for her activist work educating the public and law enforcement about the real-world consequences of those digital crimes: they endangered children, triggered stalking incidents and provoked immense psychological harm. Sometimes they ended in suicide.

As Ha Yena, she became part of a small but significant network of people in South Korea who are fighting to prevent digital sex crimes, sometimes at the expense of enacting questionable privacy laws. Well into the era of #MeToo, Seo-yeon and her contemporaries found themselves at a crossroads between privacy protection and crime prevention, echoing the many battles that have played out as more governments around the world introduce legislation meant to curb online crime.

Korea is a society of advanced technology — it boasts some of the most robust internet infrastructure in the world — but it is also a place where custom and tradition have a powerful influence on social norms and public policy. Technology and its uses continually outpace political and social reforms.

South Korea’s highly digitized society and lightning-fast internet speeds make it easy to circulate illicit footage. Once a file is on the internet, it can be difficult, if not impossible, to remove it once and for all. In one criminal case, illegal videos and photos were posted online and accessible for a monthly subscription fee. Molka is appealing not just for its salacious content but also for its profitability. Two estimates suggest nonconsensual videos shared online can fetch between $1,667 and $4,167 per gigabyte of footage, roughly an hour and a half of recordings.

According to the Seoul Metropolitan Government, there were at least 75,431 closed-circuit television cameras operating in the city as of December 2020, about one camera for every 132 residents. The country has a legal framework for protecting identifying information about individuals, but there are significant exceptions that allow law enforcement and other agencies to keep relatively close watch on people of interest. There is an atmosphere of routineness around surveillance. People seem to accept it as a part of daily life, a necessity for the relative security it ensures against violent crime and robbery and the contact-tracing abilities, which aided in South Korea’s Covid-19 response.
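
Those camera figures are internally consistent. As a quick sanity check — a minimal sketch using only the numbers reported above; the resulting population figure is an inference from them, not a reported statistic:

    # Sanity check of the figures above: 75,431 cameras at one camera per
    # 132 residents implies a population of roughly 10 million, in line
    # with Seoul's.
    cameras = 75_431
    residents_per_camera = 132
    print(f"{cameras * residents_per_camera:,}")  # 9,956,892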

But if surveillance seems ever present and acceptable in Seoul, a cross-cutting culture of privacy also prevails: Vehicle windows are typically tinted for UV protection but also privacy, and rare is the invited house guest, no matter the intimacy of one’s relationship to a family. The illusion of control over one’s personal domain is routinely undercut by those thousands of closed-circuit security cameras.

And although fines are often levied against those who distribute or are caught with molka footage, they seem to rarely dissuade further crimes, setting an entire country on edge. Parents who allow their daughters to live outside of the family home before marriage, a rarity in traditional Korean families, tell their children to get apartments on top floors to avoid being videoed through first-floor windows or hallway cameras.

In 1997, the South Korean department store chain Hyundai (of motor vehicle renown) installed dozens of cameras in the bathrooms of its buildings in Seoul, after executives cited incidents of thieves rushing into restrooms with merchandise to hide in handbags. Public criticism was swift, and the cameras came down. But soon, the use of cameras across the country boomed. Electronics were made cheap and easily available in shops and stalls around Seoul and other major cities. And by the early 2000s, most South Koreans were carrying that same equipment — mobile phones — in their pockets.

The push-and-pull between privacy and security is nothing new. As with any technology, some cameras were installed with legitimate intentions — to monitor private property, oversee patients at nursing homes or monitor babies as they slept — while others had more dastardly uses. Cameras could be used for spying on employees in break rooms and bathrooms, or given as gifts, in the form of a hidden camera alarm clock, to an unsuspecting colleague who could then be tracked.

National police data shows that in 2010, between 1,100 and 1,400 spy camera crimes were committed. By 2018, that number grew to 6,400. Of the 16,201 people arrested between 2012 and 2017 for making illegal recordings, 98% were men, while 84% of people recorded during that period were women. Prominent cultural figures, including K-pop stars, were accused and convicted of trafficking in such footage around this time.

The Digital Sex Crime Victim Support Center, established in April 2018, helps targets of digital sex crimes by deleting videos and providing additional support for criminal and civil investigations, medical care and legal assistance. Of those who sought the center’s help between 2018 and 2021, more than 76% were women, with the highest proportion being in their teens and twenties. 

Molka crimes became a central theme for South Korea’s #MeToo movement. Oftentimes perpetrators would walk away with time served and negligible fines while their videos continued to circulate on the internet. In the summer of 2018, upwards of 70,000 women took to the streets of Seoul to demand an end to molka crimes and protest the lackadaisical response from government and the judiciary.

People convicted of molka crimes can face up to five years’ imprisonment and fines of over $26,000, but data suggests that they rarely face such steep penalties: from 2014 to 2016, over 60% of those charged with digital sex crimes received fines of less than $2,200 on average.

The question of how to prosecute these crimes and stamp out their long-tail effects has been more complicated than one might imagine. Child abuse images are illegal in Korea, as is pornography. In criminal cases involving pornography, all parties involved in its creation — including those who appear in a film or an image — are considered responsible. Digital sex crimes are largely handled in the same way as illegal pornography. Police have begun to show some awareness in cases where people had no knowledge that they were being filmed. But the prevalence of these incidents has laid bare the popular assumption that targets of molka crimes can somehow be blamed for what has happened to them.

Sexism and bias among law enforcement seem to be a contributing factor. A 2021 report by Human Rights Watch found that during police investigations, officers cast doubt on those who reported being filmed without consent, suggesting that targets had somehow invited or provoked these incidents. Officers would berate people for wearing provocative clothing or sending images to their intimate partners, things the authorities believed they shouldn’t have done in the first place. 

Digital sex crimes reached new highs during quarantine restrictions at the peak of the pandemic. At the Covid-19 quarantine ward inside the Wonju hospital, one man was arrested and sentenced to 10 months’ imprisonment for filming molka videos in a woman’s shower stall.

The uptick could also be attributed to the country’s quick implementation and adoption of 5G networks. South Korea has one of the highest internet penetration rates in the world, with 98% of the country’s population online at the start of 2022. 

Today, a casual search on Naver, Korea’s answer to Google, yields dozens of cases in which young people are seeking help because of digital sex crimes, often describing run-ins with law enforcement officials who are unsympathetic or clueless about the damage that can be done in a virtual space.

“Some guy was following me on the bus stop and I heard his smartphone camera shutters go off,” one poster wrote on May 15, 2020, “but he denies he took any photos of me and deleted all of the photos so there’s no proof of them. So I’m trying to find a lawyer to represent my case to protect other victims like me.”

Another wrote on October 26, 2021: “I was at a motel w/ my girlfriend and noticed a camera across the street and reported it to the police. They caught him in three weeks. The police IDed the footage and confirmed it was my gf in the video. … How can I check if the illegal video was distributed somewhere?”

Another wrote on June 7, 2022:  “My bf illegally filmed me after I got out of shower in a motel. I can’t get it out of my mind. I feel so ashamed and guilty. …If I press charges to the police will I know the status of the investigation?”

That these posters were willing to report these crimes at all was a sea change in attitude from just a few years ago. While most lawmakers may still be catching up, many young people and technology experts have adopted more nuanced perspectives on where culpability should lie and how justice might be sought for targets.

Two years after the break-in at Seo-yeon’s motel room, student activist Seo-hui Choe was on her phone late into the evening. As a member of a group called Project ReSET (Reporting Sexual Exploitation in Telegram), Seo-hui would use a VPN and various fake identities to log into rooms on Telegram, Discord and the popular Korean chat app KakaoTalk where sexually exploitative videos were being shared.

For more than a year, she had been following media reports about videos of sexual assault and child pornography circulating through private, encrypted messaging apps. Phishing-style attacks and social manipulation — catfishing, online dating, promises of K-popesque stardom — led users to produce exploitative content, which was then used to blackmail them for more images and videos.

The male-dominated chat rooms and online communities were reported to authorities, but police largely ignored the threat. Student journalists and activists, like Seo-hui, began gaining access to the rooms and reporting what they saw to Telegram and police. The social media app, according to Seo-hui, did nothing. And the authorities said they were powerless to pursue an international company like Telegram. The company did not respond to requests for comment for this article, but its terms of service do prohibit the distribution of illegal pornography on publicly viewable channels.

Seo-hui reviewed and reported the disturbing footage she found. She would see young women’s photos being trafficked, or videos and other media offered for sale. Payments were accepted in cryptocurrencies.

A deepening divide between young men and women was soon exploited to help elect the conservative Yoon Suk-yeol to the presidential Blue House in early 2022: He ran on a platform of anti-feminism that promised to abolish Korea’s Ministry of Gender Equality and Family. Campaign promises aside, a ministry spokesperson said in an email that “the new government will make it a national task of guaranteeing the right to be forgotten and strengthening protection and support of victims of digital sex crimes.”

Meanwhile, the monitoring became a recurring nightmare for Seo-hui. Her work began to be noticed, and she became a target for abuse. Some online demonized her and her fellow activists, believing them to be radical, dangerous feminists under the employ of the Ministry of Gender Equality and Family and the previous presidential administration.

“After reporting these things, I was supposed to sleep, but all I could remember were the victims and the footage,” Seo-hui told me. “My blankets felt like they were on fire.” Her skin would crawl. Sometimes she would cry.

The media reports and Seo-hui’s own work led to continued revelations about the existence of these communities. In two group chats in particular, participants were distributing sexually exploitative videos and blackmailing dozens of women into sharing private videos online. Some of the footage they shared included rape. At least 103 individuals, 26 of whom were minors, had their videos or images sold to over 60,000 people. The chat rooms, known as the “Nth Room” and “Doctor’s Room,” were eventually shut down, and the users behind the channels arrested and convicted. On November 26, 2020, Cho Ju-bin, the 26-year-old “Doctor” who controlled the eponymous chat room, was sentenced to 40 years in prison for blackmail and sexual harassment.

Shocking as the case was, this was not the first time that the harms and real world impacts of digital sex crimes, to say nothing of the difficulties surrounding regulation and prosecution, should have been apparent. There was already a precedent for pursuing and preventing such crimes. The website Soranet, described in headlines around the world as “South Korean porn,” was taken offline in 2016 due in part to a joint operation between the Dutch government and the U.S. Immigration and Customs Enforcement, as the website’s servers were hosted in the United States before moving to and then being seized in the Netherlands. Soranet’s co-founder was sentenced to four years in prison, a sentence criticized by many campaigners as too light. But taking down a website and arresting its founders had little effect on the proliferation of such material. Images kept being uploaded faster and shared more widely.

As Seo-hui scrolled through chat room after chat room, she saw little hope on her screens that this latest wave of outrage and police action would change much, even with reports of arrests and prosecutions leading primetime news broadcasts. The laws, in the aftermath of the Nth Room scandal, still seemed to consider sexual exploitation to be a form of pornography, and treated all parties as co-conspirators.

“People didn’t understand that sharing illegal sexual exploitation videos is a crime,” Seo-hui said. “So we wanted to educate people on this issue. It’s not porn, but sexual exploitation. After Nth Room we were given a lot of promises that weren’t kept. So we were just relaying messages from the victims to the police.” 

The videos, she wanted to impress upon the police and the public, were a violation; they were nonconsensual and had to be treated as serious crimes, not as the consequences of naivety and debauchery. Seo-hui says the Nth Room was neither an anomaly nor a turning point in bringing about real change and accountability for the molka crime industry.

“It’s just the tip of the iceberg,” she told me. “It happened before, and it’s happening still.”

Since 2020, Seo-hui has stopped monitoring the internet for examples of digital sex crimes. The vicarious trauma of witnessing those crimes took a toll on her. And she reached a point where she felt powerless to effect change. If she were to stop monitoring and reporting for a minute, she told me, dozens more rooms and tertiary conversations cropped up when she returned. If she stopped for one night, putting her phone away so that she could rest before another day of classes, thousands more would appear across the web.

Seo-yeon, too, felt disheartened and diminished by the prospect of an ever-increasing number of digital sex crimes and a society that showed little respect for women. She disbanded Digital Sex Crimes Out in part because of the growing resistance to their work and the risk to her personal safety.

“There is inequality online,” she said. “But nowadays I just avoid those environments altogether so I think less about inequality.”

Instead, Seo-yeon decided to focus on her career as a software engineer as a form of resistance. “I wanted to understand computer technology in order to understand how to push for laws to prevent digital sex crimes,” she told me. Seo-yeon says there is still much work to be done. “Just because we have a new law, that doesn’t mean everything is functioning well now,” she said about her work advising Korean courts on digital sex crime prosecutions. 

“I don’t think it’s just a Korean issue. In countries like the United States and the United Kingdom, illegal sex videos are big business,” she said. “Everyone lives in the digital age.”

Seo-yeon is not alone in her belief that this is not just a Korean issue. It may be that Korea is simply further along than most other countries when it comes to the quality of its technological infrastructure and the omnipresence of cameras.

“I see these women as the canaries in the coal mine in a way,” Heather Barr, the associate director of the Women’s Rights Division at Human Rights Watch and the author of a report on digital sex crimes in South Korea, told me. “I think that what’s happening — this particular issue — is very dystopian, but also a sign of where the rest of us may be going.”

Behind a locked door, past a nondescript lobby in the ritzy Gangnam district of Seoul, is the office of the Santa Cruise company, self-described as a provider of “digital laundry services.”

The prevalence of molka crimes in Korea has given rise not only to groups like ReSET, but also to an industry of digital reputation managers. For roughly $2,000 a month the company does its best to wipe those digital traces from existence. Some customers have 10-year subscriptions. Others pay in three-, six- and 12-month intervals.

“I didn’t set out to do this job from the beginning,” Kim Ho-jin, the CEO of Santa Cruise, said. Santa Cruise began as a model and talent agency. But soon his clients came under attack online, with accusations and rumors flung across Google and Naver. “They were not able to go to school and were in and out of a mental hospital because of malicious comments.”

Kim began filing requests to the search engines to remove the material and actually had some success. People with similar problems began to seek his help. Kim now counts scores of entertainers, K-pop stars and company executives among his clients. 

Today, more than a quarter of Santa Cruise’s business comes from people who believe they are targets of digital sex crimes and want to manage their online reputations. Each month, Kim’s team of young researchers sends clients a report of the data found and deleted. Teenagers comprise roughly half of Kim’s business, and twenty-somethings represent about 30% of his clientele.

Digital entertainment culture is paramount for many teens and young adults in South Korea, who derive social value and even belief systems from idols on social media or television. In the last several years, this blurring of digital and physical existence preceded the suicides of prominent K-pop figures like Goo Hara, Kim Jong-hyun and Jang Ja-yeon, all of whom died in their late twenties after being illicitly filmed in private or by partners during sexual acts. The videos were then distributed or streamed online.

Hate speech and derogatory comments online have also led to suicides. Korea’s strictly hierarchical culture may have deep roots, but it has a profound hold in the digital world too, where it is mirrored by the reward-driven, snap acceptance or rejection of people online, worlds that young people are ill-equipped to tell apart. K-pop stars have also used spy cameras to film unsuspecting romantic partners or strangers, incidents that make these kinds of crimes seem somehow acceptable or even cool.

In 2019, the K-pop star Seungri, of Big Bang, and a nightclub owner were found guilty in a scandal involving spy camera videos, prostitution and embezzlement, among a slew of other offenses. Seungri, alleged to have embezzled $951,000, was sentenced to just three years in prison, later cut on appeal to a year and a half along with a lowered fine.

Lee Seung-hyun, known as Seungri, is taken into custody as he leaves the High Court in Seoul on May 14, 2019. Photo by Ed Jones/AFP via Getty Images.

The issue has also entered the national zeitgeist through television, where one popular program depicted young men gifting a hidden spy camera to a colleague (“Business Proposal”) and another featured an episode in which a molka victim dies by suicide (“Hotel Del Luna”).

“These young people have a lot of power. So the problem is not so much what these people do but how society responds to them,” Dr. Pamela B. Rutledge, a social scientist and director of the Media Psychology Research Center in California, told me. 

“You see something in the media and then you do it,” she said. “You see something, you process that in your psychosocial environment, and then you watch to see what happens to that person, all the while assessing whether that’s something you can actually do, but also to see whether they are rewarded or punished,” Dr. Rutledge said.

At the Santa Cruise offices, there was little to suggest anything beyond a desire to rid the internet of its power to wreak lasting havoc over a mistake or regret. But a logic of blame and shame was nevertheless at work. For teenagers who cannot afford Santa Cruise’s services and who are too ashamed, embarrassed or worried to tell their parents they need help, Kim offers his services pro bono, with one caveat: they must write a “reflection letter” about digital citizenship and the choices that led them to him.

“Even the victims are to be blamed,” Kim told me. “The fact that they film themselves is wrong in the first place. They have to recognize that. The people who fell victim to spy cams are also to be blamed because they weren’t being careful enough.” He added, “If they don’t agree to write these letters, I don’t delete the illegal content for them. So they have to agree with me.”

The crimes surfaced and reported by both Santa Cruise and ReSET became ammunition for the passage of what is known colloquially as the “Nth Room law.” In the aftermath of the Nth Room case, amendments to Korea’s Telecommunications Business Act brought new illegal content filtering and transparency reporting requirements for big social media companies. But most people did not realize what the law would mean until it went into force and filtering notifications began to appear on their phones.

In group or public chats, if you uploaded a video of anything — from a cute cat to an oblivious naked person sleeping in their bed — you would receive a notification that looked something like this: “According to the new Telecommunications Business Act, the Korea Communications Standard Commission is reviewing the content to see if it’s illegal.” 

Another similar message read: “Identification and restriction of illegally filmed content: videos. Compressed files that are sent through group open chat rooms will be reviewed and restricted from sending if it is considered an illegally filmed content by related law. You may be penalized if you send illegal filmed content, so please be cautious while using the service.”

Public outcry over censorship soon overshadowed memories of the Nth Room case and the sexual crimes committed against scores of women and girls. One of the biggest opponents of the law filed a constitutional complaint long before the outrage reached the public and political spheres. Open Net Korea, a nonprofit with the aim of maintaining freedom and openness for internet users, said the policy infringed on the public’s freedom of expression and right to know. 

“When something gets done in the National Assembly, I think it appeases the general public, and we get to move on from it,” Jiyoun Choe, a legal counsel at Open Net, told me at the organization’s office in Seoul. “But we shouldn’t move on unless it’s actually been taken care of and solved, which it’s not really being done right now.”

The filtering law went into effect in 2021, but most major companies based outside of Korea have yet to fully implement the process, due in part to some of the technical hurdles it presents. Under the law, companies can either put in place their own filtering systems that will prevent illegal content from being posted and distributed, or they can use a system built by Korea’s Communications Standards Commission. Those that choose to use their own systems must have them vetted and approved by the Commission. Filtering is a mandatory requirement for all websites operating in Korea that handle more than 100,000 daily users and offer some way for users to post original content.

The Commission’s system mimics software built by Microsoft that major tech companies like Google and Meta use to combat child sexual exploitation and trafficking. It assigns a unique number — similar to a barcode, but known in technical terms as a hash value — to photos and videos that contain illegal images so that they can be more easily found and removed from the web whenever they are re-shared or posted. This information is then placed in a database maintained by the Commission. This prevents the recirculation of footage found by the police or reported by users, like those working at Santa Cruise and ReSET.
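
In code, the core of such a system is a simple set-membership check. The sketch below is ours, not the Commission’s, whose implementation is not public; it uses an exact cryptographic hash for clarity, where production systems like Microsoft’s PhotoDNA rely on perceptual hashes that survive re-encoding and cropping. The registry entry and function names are hypothetical.

```python
import hashlib

# Hypothetical registry of hash values for footage already judged illegal,
# standing in for the database the Standards Commission maintains.
KNOWN_ILLEGAL_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # placeholder entry
}

def fingerprint(video_bytes: bytes) -> str:
    """Reduce a file to a fixed-length identifier (the 'barcode').
    SHA-256 only matches byte-identical copies; a real filter would use a
    perceptual hash robust to re-encoding, cropping and watermarks."""
    return hashlib.sha256(video_bytes).hexdigest()

def should_block(upload: bytes) -> bool:
    """Check an upload's fingerprint against the registry. Only the
    fingerprint is compared; no human views the content at upload time."""
    return fingerprint(upload) in KNOWN_ILLEGAL_HASHES
```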

But, of course, the system cannot prevent new crimes. “It doesn’t criminalize the actual activity itself,” Open Net’s Jiyoun said. “This law itself will not be effective in preventing people from going back on Telegram, or even deeper into the internet, to continue doing what they were doing altogether,” she said.

“It just asked the company to restrict what’s being shared.” In so doing, she said, “we are supposed to blindly trust that they’re doing their jobs. We don’t have information on how often [the hash database is] updated, or how they presume to know if this content was created illegally. So there’s a lot of problems with transparency.”

And the burden it places on companies is twofold. Some fear they are signing onto a future censorship apparatus, given that the Korean government developed the software. And the extra server and networking capacity required could unduly burden smaller social media platforms or special interest forums that cannot afford the additional costs.

While foreign companies like Meta and Alphabet were given an extension to implement their own connection to the government’s database of hash values, Pinterest was the only foreign company operating in Korea that agreed to use the government’s proprietary software to vet its users’ content. (Pinterest did not respond to Coda Story’s request for comment or additional information.)

“Many broadcasters misunderstand filtering technology. You think that when a video is uploaded, we look at the content and filter it,” said Kim Mi-jeong, of the Illegal and Harmful Digital Content Response Division in the Consumer Policy Bureau of the Korea Communications Commission, which oversees the Korea Communications Standards Commission. “That’s not the case. There is already a video that [the Commission] has determined is illegal. When a user uploads a video, technically, only feature information is compared to determine whether the video is illegal or not. So that’s a huge misunderstanding.”

In a statement, the Commission said it only targets “public bulletin boards, not private messages,” though many of these public boards also include anonymous chat rooms.

“Now, people have come to realize that anyone can be punished if they take pictures of someone else’s body,” said Lee Young-mi, an attorney and director at the Korean Women’s Bar Association. Lee noted that the benefit of such a law is that it shifts public perception on right and wrong, challenging long-held beliefs in a society that remains hierarchical and patriarchal.

“In terms of reducing the number of children who [are exploited], I think it’s very good. It’s positive,” Lee said of the filter law. But, she added, companies like Apple are not cooperating enough with law enforcement and government requests to turn over data and information. She said companies should be less concerned with privacy and more concerned with investigating criminals who use the technology, whether it be hardware or software, to do harm.

Soo-jong Lee, a criminal psychologist at Kyonggi University, told me that even with the enactment of the law, it was difficult to change the culture. She explained how, as a side effect of K-pop culture, people are seeking stardom through random chats and illicit messages. At the same time, “our culture also blames the victims,” she said. 

“We say the world has changed; we say [blaming victims] is not acceptable. But it happens only on the surface. Below that surface, there is still a sense of purity that people care about. There must be a discriminatory view of women in particular.”

The filter law became political ammunition during the recent presidential election. It divided the ruling and opposition party candidates, with the People Power Party presidential candidate Yoon Suk-yeol writing on Facebook, “If videos of cute cats and loved ones are also subject to censorship, how can such a country be a free country?” The Democratic Party presidential candidate Lee Jae-myung countered, “All freedoms and rights have limits.” The divisions within the culture were being drawn along both gender and free speech lines, with many young men feeling the law was an overreaction.

People innovate faster than laws can be written. And a hurried approach to national or internet security can run afoul of the future: however well-intended the system’s use is now, it could set the country on a path to misuse later.

“As there have been cases of malfunctions in censorship, I think that the law is poor and lax, and if you want to prevent the distribution of illegal things, I think that follow-up measures through reporting are usually correct,” one male teenager at Seoul National University told me. He did not want to give his name. “I think that controlling and censoring things in advance can lead to a system where all users are seen as potential criminals.”

Seo Ji-hyun, a former public prosecutor at the Tongyeong branch of the Changwon District Prosecutors’ Office, became widely recognized as a pioneer of Korea’s #MeToo movement after she said in a live television interview that she was groped by her superior. Seo felt that the filter law was a step in the right direction, though ineffective. Those who criticized it, she said, missed the point that the subject matter needed to be addressed, and any measures to further crack down on sexual exploitation, trafficking and digital sex crimes were welcome. 

She also believes that well-drafted and thought-out legislation can spur social and civic change. Yet, despite her own work on the Ministry of Justice Digital Sex Crimes Task Force this past year, little has changed. Her team made 11 policy recommendations to the National Assembly on how to prosecute and handle digital sex crime cases. Among those recommendations was a plan for how to effectively seize and prevent the redistribution of illegal videos. But only one recommendation, to unify an application process for survivor support, was implemented in April 2022.

It may be years before Open Net Korea’s constitutional court complaint is resolved. Jiyoun said that the law does not help targets enough and, in the meantime, places “a lot of the onus on the corporations, which could be detrimental to the internet.”

“Having [social media companies] be responsible for any content even entering their platforms would just give the companies incentive to not allow more content onto their platforms, which would be bad for democracy,” Jiyoun said. “Companies would have to cover their liability by not allowing anything to be uploaded to the internet. That would hinder the role of the internet as a vessel for information, where people can whistleblow or participate in the #MeToo movement, everything that is needed for democracy to thrive.”

Dressed in matching yellow vests, So-yeon Park and Bo-min Kim drove to the Wonju Hanji Theme Park, a cultural center devoted to traditional Korean printing methods and paper pressed from the bark of mulberry trees. The park’s managers greeted them with a warmth reserved for old friends.

So-yeon placed a yellow A-frame resembling a “Caution Wet Floor” sign outside the women’s restroom and made sure no one was inside. Bo-min set a hard plastic storage case on the sink vanity and pulled out a pair of spy camera detectors. The two women set to work.

“This is a new building, so there are no holes for installing a hidden camera,” So-yeon said as she pushed open all of the stall doors and held the camera detector to her eye.

The flashing red lights flickered across toilet bowls and the walls of each stall, waiting to bounce off hidden cameras and transmitters. She then pointed the instrument at the ceiling, checking the air conditioner unit and the extinguisher system. The two women placed small, circular blue stickers over anything that resembled a hole.

“If we see a trash can, a lighter or a soda bottle, we look into it,” Bo-min said, noting that spy cameras “can look like just about anything.”

Across the country, women’s bathrooms are plastered with resources to detect and report illegal filming. In the final image, the text reads “I’m watching you,” alongside the phone number of the government’s Digital Sexual Offense Victim Support Center. Photos by JeongMee Yoon.

Son-hae Young is no stranger to these mechanisms. The founder of Seohyun Security, a company that specializes in removing illegal cameras, wiretaps and other tracking tools, Son-hae trains police officers and corporations on security precautions and works with a team to sweep hotels and school buildings. “Inspection is done with the national budget here. It is necessary to let people know how hidden cameras are being modified and how fast the technology is changing,” he told me.

“There is no other place in the world,” he said, “where elementary, middle and high schools are regularly inspected for hidden cams.”

From a brown paper bag he pulled a chalkboard eraser, a bottle of Coca-Cola, a clock, a mirror and a USB storage drive. All of them contained hidden cameras.

In September 2018, in response to tens of thousands of #MeToo protesters holding banners proclaiming, “My body is not your porn,” the Seoul City Government announced it would increase public bathroom inspection by assigning 8,000 employees to inspect the city’s more than 20,000 bathrooms on a daily basis, a step up from the previous 50 employees and monthly inspections. By law, South Korean cell phones must now emit loud shutter noises when a photo is taken, a feature which cannot be deactivated. Teams are also deployed to check bathrooms and locker rooms in schools.

Today, Seoul’s 8,000-strong camera detection crew has been all but abolished. The official line was that the crews found few cameras, given how quickly inspection teams removed any that appeared. The teams now run spot checks, sometimes together with a security company like Seohyun Security, inspecting restrooms twice monthly. But on President Yoon’s watch, digital sex crimes are expected to rise substantially, even as underreporting remains a problem.

Various South Korean federal and municipal agencies responsible for efforts to curb the rise in spy cameras in public places — including the Seoul City Government’s Women’s Safety Business Team and the Department of Women and Family — declined to comment for this article.

As we drove back to City Hall, So-yeon told me she was proud of being a deterrent and felt that the visibility of her team reduced such crimes, at least in her city. Bo-min’s daughter lives in Seoul. “She is very afraid to go to the bathroom on the subway or anywhere public,” Bo-min said. “We are very proud, and she is very proud that we are doing this work.”

Back in Seoul, on the red line at the Jeongja subway station, a train glided in. The platform doors opened and a queue of commuters stepped out of the car before those waiting to board stepped in, everyone’s politeness on public display.

The doors closed behind the passengers. A large electronic eye, the prototypical vision of HAL 9000, appeared as the subway car doors met — a public service poster. “Ban illegal filming. Your shutter sound can lead to jail time,” the text below the eye read, followed by a stark hashtag warning: #iwillwatchyou. 

No further explanation is required for riders who can never be certain of who is being watched and who is doing the watching.

To report a crime involving non-consensual intimate images, or learn more about how to support survivors, visit https://stopncii.org.

Democrats want to prevent attacks on dissidents living in the US
https://www.codastory.com/authoritarian-tech/democrats-bill-transnational-repression-erdogan/
Mon, 12 Dec 2022 18:25:53 +0000

A new congressional bill would penalize foreign regimes for targeting dissidents in the U.S., but partisanship and geopolitics risk getting in the way

In May 2017, Turkish President Recep Tayyip Erdogan’s bodyguards and supporters attacked Lucy Usoyan on a Washington, D.C. street, outside the Turkish ambassador’s residence, just ten minutes from the White House. 

“It was very quick and unexpected,” Lucy Usoyan told me over the phone. “You never expect to be under the foot of a president’s bodyguard.” U.S. State Department documents obtained by Usoyan’s lawyers indicate that Erdogan witnessed the attack and may have ordered it to be carried out.

Authoritarian regimes are increasingly ignoring the sovereignty of other nations to lash out at dissent abroad or locate and punish citizens who have found refuge in another country. In what experts label “transnational repression,” governments like Erdogan’s are intimidating people through online disinformation campaigns and, increasingly, by physically targeting them for violence.

The U.S. Congress has responded by introducing a bill designed to crack down on the targeting of Americans by foreign regimes. The Stop Transnational Repression Act, which aims to define and criminalize transnational repression in federal law, would impose a maximum 10-year sentence for those convicted of the crime. 

The bill “would be a very powerful deterrent to folks who want to try and undertake these actions on behalf of their governments,” Annie Boyajian, the vice president for policy and advocacy at Freedom House, said.

Figuring out how to effectively counter acts of transnational repression — which by definition are acts committed by sovereign foreign governments — is challenging for legislators. Freedom House has warned that it is difficult to distinguish “legal activity on behalf of a foreign power or entity from illegal activity, and thus to address transnational repression threats before they escalate.”

The bill has been introduced at a politically fraught moment. The bill’s co-signers are all Democrats in a House soon to be controlled by the Republican Party. And President Joe Biden’s ability to maneuver is constrained by energy politics and global pressures fueled by Russia’s war in Ukraine. In June, Uzra Zeya, a State Department under secretary, affirmed the Biden administration’s strategy to tackle threats posed by China by using tools such as imposing visa restrictions, controlling technology exports that could be used to conduct acts of repression and enhancing law enforcement.

In October, the U.S. Department of Justice charged seven individuals with conducting a campaign to surveil and coerce a U.S. resident to return to China as part of an effort called “Operation Fox Hunt.” The operation is part of a broader strategy of targeting people outside of China that, together with Operation Sky Net, claims to have caught 8,000 international fugitives. The Chinese state says these individuals are accused of committing financial crimes, but some are dissidents and whistleblowers.

A weak link is federal communication with local law enforcement, analysts say. The FBI has set up a transnational repression hotline, but local police fail to “understand the full scope of the threat” posed by foreign regimes, Boyajian, from Freedom House, said. By codifying transnational repression into law, she said, the bill will encourage law enforcement agencies to take transnational repression more seriously.

Biden came under fire last week when a U.S. judge dismissed a lawsuit against the Saudi Crown Prince, Mohammed bin Salman, for the murder of U.S.-based journalist Jamal Khashoggi. The judge said that while he felt uneasy about it, his hands were tied because the Biden administration had made a recommendation that the Saudi leader be given political immunity.

The starkly different approaches to transnational repression committed by the Saudi royal family and the Chinese Communist Party are an indication of how efforts to stop and prosecute transnational repression are diluted by America’s wider geopolitical goals. The U.S. is currently taking an aggressive posture against China’s government, while countering transnational repression from Saudi Arabia risks souring relations with a major oil supplier.

Killer robots have arrived on Ukrainian battlefields
https://www.codastory.com/authoritarian-tech/killer-robots-ukraine-battlefield/
Thu, 08 Dec 2022 15:54:01 +0000

A new generation of autonomous machines is appearing in Ukraine. They augur a new military era, offering capabilities that far outstrip current weapons

Amid Ukraine’s muddy trench warfare, grinding artillery bombardments and Soviet-era tank battles, a futuristic digital war is being waged as the line between human and machine decision-making grows ever thinner.

Since Russia invaded Ukraine in February, AI-powered drones — both homemade and highly sophisticated — have been deployed on an unprecedented scale on the battlefield. Russia has reportedly used the “highly autonomous” Kalashnikov Kub and Lancet kamikaze drones. Ukraine has relied on the Turkish Bayraktar TB2, which has autonomous flight capabilities and boasts “laser guided smart ammunition.” The U.S. has committed to sending Ukraine 700 Switchblade kamikaze drones and “Phoenix Ghosts” that use GPS tracking and object recognition software.

But now a new generation of autonomous machines — colloquially known as “killer robots” — is debuting in Ukraine. They augur a new military era, offering capabilities that far outstrip the current generation of weapons, and are no longer limited to drones in the sky or at sea. They are poised to upend modern warfare, introducing new challenges, new lethality and new concerns.

In late November, Germany discreetly announced that it would provide 14 tracked, remote-controlled infantry vehicles for support tasks as part of this year’s $1.64 billion in military support for Kyiv. These unmanned vehicles rely on far superior tech to that of similar robots used during the wars in Afghanistan and Iraq, which were mostly limited to landmine disposal.

Estonian military contractor Milrem Robotics, the maker of the Tracked Hybrid Modular Infantry Systems unmanned ground vehicles, also called “THeMIS,” will provide Ukraine with units primarily designed for casualty evacuation, an example of how the war in Ukraine is serving as a testing ground for cutting-edge but unproven technology.

Milrem Robotics CEO Kuldar Väärsi said the THeMIS vehicles, which can be outfitted with light or heavy machine guns and anti-tank missiles, are “considerably cheaper than a tank” and will be a common sight on battlefields in the coming years.

“As with all new technology, especially technology that hasn’t existed before, concept development and experimentation are needed to see how it fits into the doctrine before large quantities will be deployed,” he said.

Germany’s ministry of defense invested in THeMIS at an early stage of development, but in a version designed for saving lives rather than its lethal configuration, according to a source familiar with European military procurement. A German ministry of defense spokeswoman declined to comment, citing security reasons.

Some experts have begun to sound warnings, worried that military aid to Ukraine is substituting flashy, newfangled weaponry for proven, effective conventional arms.

“Much will be made of the importance of using emerging and disruptive technologies in wars of the future,” Daniel Fiott, professor at the Brussels School of Governance and Fellow at the Real Elcano Institute, said. But the lure of high-tech solutions should not come at the expense of conventional arms deliveries to Ukraine, he argued.

“No doubt, many powerful militaries will be arguing that the application of high-tech solutions will be needed to enhance the performance of arms and give militaries an advantage in the information space,” Fiott said. 

The Ukrainian robotics company Temerland has released a weaponized reconnaissance robotic platform called GNOM, designed as an anti-mine vehicle tailored for operational combat units. “In the next decade we will see the introduction of ground-based drones with automation elements and further increase AI for independent response and decision making,” Eduard Trotsenko, the CEO of Temerland, said.

Meanwhile, NATO allies like the Netherlands are already testing AI-powered robotics. Lieutenant Colonel Sjoerd Mevissen, commander of the Royal Netherlands Army’s Robotics and Autonomous Systems unit, said every war is a technology test. 

“We see a big advantage in the future, having these types of systems,” he said, referring to the THeMIS unmanned ground vehicle. “It will also lower the cognitive and physical burden for soldiers when they are able to deploy more of these vehicles.”

Colonel Mevissen said pricing — each unit costs approximately $350,000 — remains a significant barrier to having these types of robots fighting side by side with soldiers in the short term. 

Russia’s war of aggression has spurred homegrown Ukrainian military tech innovation. Ukrainian soldiers have modified commercial drones for the frontlines, and a whole suite of tech ingenuity has come together in self-organized groups, what Ukrainians call hromada.

In late October, Ukraine’s Minister of Digital Transformation Mykhailo Fedorov told a NATO conference that Ukraine was developing “Delta,” a situational awareness platform that helps soldiers locate enemy troops and advises on the best coordinated responses. Delta was instrumental in helping Ukrainian troops retake Kherson from Russia, in what Fedorov described as “World Cyber War I.”

Former President Petro Poroshenko presents the THeMIS Unmanned Ground Vehicle in August 2022. An automatic cannon, an anti-tank system — including Stugna or Javelin — or a reconnaissance system can be placed on the combat robot. Mykhaylo Palinchak/SOPA Images/LightRocket via Getty Images.
The THeMIS vehicle is controlled remotely, in this field test by a drone operator who launches a quadcopter to monitor the operation of the evacuation robot. The multi-purpose crawler, named “Zhuravel” or stork, will be used for evacuating wounded soldiers on the front line where it is difficult for medics to reach by vehicle or on foot. Mykhaylo Palinchak/SOPA Images/LightRocket via Getty Images.
A 2018 demonstration of an armed Milrem Robotics THeMIS Adler in Villepinte, France. Christophe Morin/IP3/Getty Images.

To counter Russia’s drones, many of which are made in Iran, Ukraine’s army has deployed newly designed Lithuanian “SkyWipers,” which can not only bring down Russian drones but take control of them, effectively hijacking them, in the first widespread use of such devices.

But much of the most advanced killer robot work is kept within the borders of NATO countries. U.S. military and European defense companies are withholding much of their latest high-tech equipment to prevent it from ending up in the hands of Russia or China, said Fiott, the professor from Brussels.

In late November, the U.S. Navy launched a “Digital Horizon” exercise to develop the world’s first “unmanned surface vessel fleet.” U.S. General Erik Kurilla recently told a conference in Bahrain that AI-powered marine drones intercepted a dhow sailing ship carrying thousands of kilos of explosives in the Arabian Gulf “without any orders and without the team in the operations center even pushing a button.”

U.S. defense giant Lockheed Martin has developed a crew-less helicopter called Matrix that demonstrated flying autonomous missions in October. And the first squad of pilot-less aircraft “wingmen,” which fly alongside manned fighter jets, are being developed for the British army’s 20-year “radical transition” plan, dubbed Future Soldier. 

These types of drone projects are more successful because AI can better model and navigate the homogeneous and predictable environments of the sky and sea, according to Max Cappuccio, a Canberra, Australia-based academic and co-author of a research paper entitled “Saving Private Robot.” “I don’t think anybody could say exactly when fully autonomous ‘killer robots’ will be ready to be systematically deployed in contested scenarios,” he said.

Regardless of when fully autonomous military technology comes online, Mevissen, the colonel who heads the Dutch army’s robotics unit, believes the world faces a “new arms race,” one of constant software redesign, AI development and cybersecurity upgrades.

“The hardware is quite easy,” Mevissen said. “So, this is mainly a race for software.”

As a result, militaries are adjusting recruitment strategies to meet an urgent need for software engineers, AI experts and soldiers able to work with tech-rich equipment.

“You need good soldiers who are also very good gamers,” Mevissen said.

Critics disagree. “We need to prohibit autonomous weapons systems that would be used against people, to prevent this slide to digital dehumanization,” Human Rights Watch argued in a campaign against the deployment of fully autonomous weapons.  

In 2023, the Dutch government will host the world’s first international conference on the military applications of AI. 

Colonel Mevissen counseled calm: “Humans are giving the system the target. We are giving the system the mission. What is possible only comes from us.”

As anxiety about crime peaks, US cities look to surveillance tech. But does it actually work?
https://www.codastory.com/authoritarian-tech/us-city-surveillance/
Thu, 10 Nov 2022 16:18:54 +0000

From San Francisco to New York, even progressive enclaves are turning to authoritarian tech to appear tough on crime

In the run-up to the U.S. midterm elections, public anxiety about crime became a flashpoint. While campaigns for right-wing candidates in battleground states painted alarming pictures of cities riddled with crime under the control of Democrats, voters, too, expressed real concern about the issue. An October survey by Pew Research Center showed that 61% of registered voters viewed violent crime as “very important” to their vote.

Even in Democratic-majority cities, public anxiety about crime seems to be peaking. Determined to assuage people’s concerns (and keep their votes), major cities including San Francisco, Chicago, and New Orleans are turning to technical surveillance as a solution.

This marks a big shift, especially for a city like San Francisco, which in 2019 became the first U.S. city to ban the use of facial recognition technology by local public agencies, including the police. Boston, Portland, Oakland, and Jackson, Mississippi, have since followed San Francisco’s lead, passing similar restrictions of their own that prevent public agencies from using privately developed technologies to identify individuals in criminal investigations or other procedures.

Spearheaded by privacy advocates and buoyed by mass protests against police abuse after the killing of George Floyd, these policies were intended to keep cities from treading into the legal and ethical gray area where facial recognition technology currently sits. 

But now the tide seems to be turning. A recent poll found some 65% of voters in San Francisco report feeling less safe today than they did in 2019.

“We went from a long-term view to an extremely short-term view,” explained Tracy Rosenberg, the advocacy director for Oakland Privacy, a group that advocates for surveillance oversight in the Bay Area. 

“The narrative that was dominant in 2019 was the long-term implications of the ubiquitous use of facial recognition, which is basically the end of public anonymity. And I think that narrative has largely been replaced by a narrative that [says]: ‘Who cares about the future when right now your car is getting stolen or your store is being looted?’ And that basically the short-term implications on your life right now are more important than any sort of future surveillance state that might develop.”

Security cameras on Rodeo Drive, part of an extensive network of surveillance cameras throughout Beverly Hills. Photo: Mel Melcon / Los Angeles Times via Getty Images

Public concern about crime has clearly gone up, but national crime data reveals a complex picture. The murder rate spiked in 2021, reaching its highest point in nearly 25 years, but now appears to be decreasing, with homicides in major cities down nearly 5% in 2022. All other kinds of violent crime have held steady or dropped since 2019, according to Pew. And cities’ experiences with violent crime are not uniform. As of November 2022, murders have increased by nearly 30% in New Orleans and Charlotte compared to the same time period in 2021, and decreased in others, including San Francisco and Oakland. 

Despite San Francisco’s pioneering ban on the use of facial recognition technology, in September 2022 the city’s Board of Supervisors passed a policy that will allow law enforcement to access the video footage of private security cameras in real time. During a 15-month pilot phase, San Francisco police will be able to view up to 24 hours of live video footage from private surveillance cameras during criminal investigations and large public events. 

In a letter to city officials, a coalition opposing the ordinance, including the American Civil Liberties Union (ACLU) of Northern California and the San Francisco Public Defender’s Office, argued the proposal “massively expands police surveillance” and could give officers the ability to “surveil any large gathering of people in San Francisco, including the crowds that gather for the Pride Parade, street markets, and other political and civic events.”

The Electronic Frontier Foundation’s Matthew Guariglia described the Board’s decision as an attempt to “[put] voters at ease that something, anything is being done about crime.”

These San Francisco legislators are not alone. Their decision reflects a broader trend playing out in left-leaning cities nationwide. Cities are expanding the use of surveillance technology to reduce crime, or at least assuage some citizens’ concerns about crime, sometimes without clear evidence that these tools are effective as such. These cities also risk entrenching a permanent surveillance infrastructure that may be difficult to dismantle down the road. “The history of surveillance suggests that it’s not easy to put the genie back in the bottle,” argues Rosenberg. 

One of the most high-profile examples of this dynamic comes out of New Orleans, where lawmakers moved to expand police surveillance less than two years after passing a sweeping facial recognition ban. In July, the New Orleans City Council voted to allow the city police department to request access to facial recognition technology from the Louisiana State Analytical and Fusion Exchange, which analyzes data for police, to investigate certain kinds of crimes, including rape, murder, carjacking, robbery, and “purse snatching.”

The ordinance passed amid a surge in violent crime in New Orleans not seen since the mid-1990s. In early July, just weeks before the city council approved the policy, New Orleans reportedly had the highest murder rate in the nation. Supporters of the measure, including the city’s mayor, claimed that it would help police rein in crime by helping officers track down perpetrators more effectively. 

This raises a critical question: Do these tools actually help reduce or solve crimes? As one city council member who voted against the New Orleans policy pointed out, the argument was not backed up by empirical evidence. 

During a hearing on the vote, an official with the police department admitted that he had no information about how frequently the department used facial recognition before it was banned in 2020 and whether its use had led to any arrests or convictions. “You have no data, sitting here today, telling me that this actually works, that it leads to arrests, admissions or clearances,” the councilmember Lesli Harris said. 

The Louisiana chapter of the ACLU blasted the council’s decision to “expand racist technologies,” highlighting research that has found that facial recognition disproportionately misidentifies women and people of color. A 2019 federal study found that the majority of facial recognition systems are biased, misidentifying Black and Asian faces at significantly higher rates than their white counterparts. 

These flawed matches have real-world consequences: At least three Black men in the U.S. have been wrongfully arrested after facial recognition software incorrectly identified them for crimes they did not commit.

Elsewhere, cities are embracing a controversial gunshot detection surveillance technology that a study from the Northwestern School of Law found to be “inaccurate, expensive, and dangerous,” sending police on “unfounded deployments” in predominantly Black and Latino neighborhoods. The technology, ShotSpotter, uses a network of discreet acoustic sensors to identify the location of gunshots and send an alert to the police, who can then decide to dispatch an officer to the scene of the alleged crime.
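
How can a network of microphones locate a gunshot? ShotSpotter’s exact methods are proprietary, but the underlying idea, multilateration, is textbook physics: sound travels at a known speed, so the differences in when each sensor hears the same bang constrain where it came from. Here is a minimal sketch; the sensor positions, timings and grid search are all our own invented assumptions, not the company’s system.

```python
import math
from itertools import product

SPEED_OF_SOUND = 343.0  # meters per second in air at roughly 20 C

# Invented sensor positions (x, y) in meters, and the times in seconds at
# which each sensor heard the same impulsive sound. These numbers were
# generated for a source near (350, 100).
sensors = [(0.0, 0.0), (500.0, 0.0), (0.0, 500.0), (500.0, 500.0)]
arrival_times = [1.061, 0.526, 1.550, 1.246]

def residual(x: float, y: float) -> float:
    """Sum of squared errors between observed and predicted arrival-time
    differences (relative to sensor 0) for a candidate source at (x, y).
    Using differences removes the unknown moment the shot was fired."""
    dists = [math.hypot(x - sx, y - sy) for sx, sy in sensors]
    predicted = [(d - dists[0]) / SPEED_OF_SOUND for d in dists]
    observed = [t - arrival_times[0] for t in arrival_times]
    return sum((p - o) ** 2 for p, o in zip(predicted, observed))

# Brute-force grid search at 5-meter resolution; a production system would
# use nonlinear least squares and filtering to reject non-gunfire sounds.
best = min(product(range(0, 501, 5), repeat=2), key=lambda p: residual(*p))
print(f"Estimated source location: {best}")  # -> (350, 100)
```

The hard part in practice is not this geometry but the classification step, deciding whether an impulsive sound is gunfire rather than a firework or a backfiring truck, which is where most of the disputes over accuracy arise.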

The firm has contracts in over 120 cities nationally, some of which have come under fire for pouring millions into a technology that critics say is error-prone and ineffective. ShotSpotter contests claims of inaccuracy, saying the technology has a 97% accuracy rate. But a 2021 analysis of the Chicago Police Department’s use of ShotSpotter by the city’s Office of Inspector General found that just 9% of alerts were linked to gun-related crimes.
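
The two percentages are not direct contradictions, because they answer different questions: the vendor’s figure describes how often it counts an alert as correctly classified, while the inspector general’s figure describes how often police could tie an alert to evidence of a gun crime. A toy calculation, with numbers that are our assumptions rather than either party’s data, shows how both can be reported truthfully:

```python
# Toy arithmetic with assumed inputs -- not data from ShotSpotter or Chicago.
alerts = 10_000

vendor_accuracy = 0.97   # vendor-style metric: alerts it deems correctly classified
linked_to_crime = 0.09   # audit-style metric: alerts police tied to a gun crime

print(f"Alerts the vendor counts as accurate:    {int(alerts * vendor_accuracy):,}")
print(f"Alerts linked to a documented gun crime: {int(alerts * linked_to_crime):,}")
# 9,700 vs. 900: a sound can be classified as gunfire without any evidence
# ever being recovered at the scene, so the two denominators differ.
```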

A recent class action lawsuit, filed by the MacArthur Justice Center at Northwestern University, alleges that the city “has intentionally deployed ShotSpotter along stark racial lines and uses ShotSpotter to target Black and Latinx people.”

Despite such criticisms about the technology and its impact on policing, cities are still using it. Earlier this month, the Detroit City Council ended a months-long, divisive debate about whether to expand ShotSpotter when it approved a $7 million contract to deploy the system to 10 new neighborhoods in the city. Detroit’s decision came just days after Cleveland’s City Council voted to quadruple the size of ShotSpotter’s current use area. Other cities that have recently moved to expand or renew contracts include Sacramento, Houston and Chicago.  

Meanwhile, in New York, Mayor Eric Adams, whose ‘90s-style “tough on crime” rhetoric has been a hallmark of his campaign and time in office, has been a vocal proponent of high-tech policing, including facial recognition and gunshot detecting technology like ShotSpotter. Adams, a former New York City police officer, has sought to dramatically expand the use of facial recognition within the police department and has expressed interest in installing metal detectors in city subway stations and replacing school metal detectors with new technology that would scan students for weapons. 

The overall picture, says Albert Fox Cahn, the founder and executive director of the Surveillance Technology Oversight Project in New York, is one of “surveillance opportunism” in which technology companies are pitching surveillance systems to lawmakers and law enforcement agencies seeking to quell concerns about public safety. To promote these technologies, Fox Cahn added, some public officials have positioned the expansion of surveillance in cities as a more humane alternative to traditional policing.

Guariglia of the Electronic Frontier Foundation explained, “Surveillance doesn’t come without the iron fist of the police department. Because even if they capture something on surveillance and they want to arrest a person, that person is not going to be arrested by a camera. They’re going to be arrested by a person with a nightstick and handcuffs and a gun.” At the end of the day, this trend pushes cities toward a vision of citywide surveillance favored by some of the world’s most authoritarian regimes.

For now, San Francisco’s facial recognition ban remains intact. But some civil liberties advocates worry that the decision by the city’s Board of Supervisors to grant the police wider surveillance powers could give license to other cities and jurisdictions to follow suit. 

“I think that’s one of the most disturbing parts of what happened in San Francisco,” explained Oakland Privacy’s Rosenberg. “Because when you don’t have those facial recognition bans in place, the green light from a big city, a progressive city, a city that’s been famous for innovations in surveillance and looking at things with a critical lens — I think it provides a sort of implicit invitation to other cities that don’t have these bans in place to jump on the bandwagon.”

Still, as many privacy experts are quick to point out, it’s unclear if this trend will have staying power. They point to the general ebbs and flows of crime — at its peak, a sense of public insecurity tends to garner more support for policing and a willingness to erode civil liberties than it may when citizens feel safer — as well as the strength of the growing anti-surveillance movement. 

“Five years ago, it was unimaginable that there could have been a ban on any type of surveillance technology,” ​​Matt Cagle, a senior staff attorney for the Technology and Civil Liberties Program at the ACLU of Northern California, remarked. “When we started talking about this at the ACLU, we got laughed at by folks in political spaces when we proposed the idea of banning facial recognition.” Now, though, he adds, there are “more groups who are opposed to government surveillance at the local level…by an order of magnitude over what that was five or ten years ago. And I think that’s an important trend even though on the policy itself, the votes didn’t swing the right way this time.” 

In the next five years, we will see if those groups have the power to put the genie back in the bottle.
