Authoritarian Tech Newsletter - Coda Story https://www.codastory.com/newsletters/auth-tech/ stay on the story Thu, 30 Nov 2023 13:18:12 +0000 en-US hourly 1 https://wordpress.org/?v=6.4.1 https://www.codastory.com/wp-content/uploads/2019/07/cropped-LogoWeb2021Transparent-1-32x32.png Authoritarian Tech Newsletter - Coda Story https://www.codastory.com/newsletters/auth-tech/ 32 32 For OpenAI’s CEO, the rules don’t apply https://www.codastory.com/newsletters/openai-ethics-board-altman/ Thu, 30 Nov 2023 13:18:11 +0000 https://www.codastory.com/?p=48563 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Palestinians face detention over “incitement” on social media, and Netanyahu welcomes Elon Musk despite his antisemitic posts on X.

The post For OpenAI’s CEO, the rules don’t apply appeared first on Coda Story.

]]>
Since my last newsletter, a shakeup at OpenAI somehow caused Sam Altman to be fired, hired by Microsoft, and then re-hired to his original post in less than a week’s time. Meet the new boss, literally the same as the old boss.

There are still a lot of unknowns about what went down behind closed doors, but the consensus is that OpenAI’s original board fired Altman because they thought he was building risky, potentially harmful tech in the pursuit of major profits. I’ve seen other media calling it a “failed coup”, which is the wrong way to understand what happened. Under the unique setup at OpenAI — which pledges to “build artificial general intelligence (AGI) that is safe and benefits all of humanity” — it is the board’s job to hold the CEO accountable not to investors or even to its employees, but rather to “all of humanity.” The board (alongside some current and former staff) felt Altman wasn’t holding up his end of the deal, so they did their job and showed him the door.

This was no coup. But it did ultimately fail. Even though Altman was part of the team that created this accountability structure, its rules apparently no longer applied to him. As soon as he left, his staff apparently threatened to quit en masse. Powerful people intervened and the old boss was back at the helm in time for Thanksgiving dinner. 

Now, OpenAI’s board is more pale, male and I dare say stale than it was two weeks ago. And Altman’s major detractors — Helen Toner, an AI safety researcher and strategy lead at Georgetown University’s Center for Security and Emerging Technology, and Tasha McCauley, a scientist at the RAND Corporation — have been shown the door. Both brought expertise that lent legitimacy to the company’s claims of prioritizing ethics and benefiting “all of humanity.” You know, women’s work. 

As esteemed AI researcher Margaret Mitchell wrote on X, “When men speak up abt AI&society, they gain tech opportunities. When non-men speak up, they **lose** them.” A leading scholar on bias and fairness in AI, Mitchell herself was famously fired by Google on the heels of Timnit Gebru, whose dismissal from Google was sparked by her critiques of the company’s approach to building AI. They are just a few of many women across the broader technology industry who have been fired or ushered out of powerful positions when they raised serious concerns about how technology might affect people’s lives.

I don’t know exactly what happened to the women who were once on OpenAI’s board, but I do know that when you have to do a ton of extra work simply to speak up, only to be shut down or shown the door, that’s a raw deal. 

On that note, who’s on Altman’s board now? Arguably, the biggest name is former U.S. Treasury Secretary Larry Summers, who used to be the president of Harvard University, but resigned amid fallout from a talk he gave in which he “explained” that women were underrepresented in the sciences because, on average, we just didn’t have the aptitude for the subject matter. Pick your favorite expletive and insert it here! Even though Summers did technically step down as president, the university still sent him off with an extra year’s salary. He has since continued to teach at Harvard, made millions working for hedge funds and become a special adviser at kingmaker venture capital firm Andreessen Horowitz. And now he gets to help decide the trajectory of what might be the most consequential AI firm in the world. That is a sweet deal.

The other new addition to the board is former Salesforce Co-CEO Bret Taylor, who was on the board of Twitter when it was still Twitter. There, Taylor played a major role in forcing Elon Musk to go through with his acquisition of the company, though Musk had tried to back out early in the process. This was good for Twitter’s investors and super terrible for everyone else, ranging from Twitter’s employees to the general public who had come to rely on the service as a place for news, critical debate and coordination in public emergencies. 

In Twitter’s case, there was no illusion about benefiting “all of humanity” — the board was told to act on investors’ behalf, and that’s what it did. It shows just how risky it is for us to depend on tech platforms run by profit-driven companies to serve as a quasi-public space. I worry that OpenAI will be next in line. And I don’t see this board doing anything to stop it.

GLOBAL NEWS

Thousands of Palestinians in the Israeli-occupied West Bank have been arrested since Oct. 7, some over things they’ve posted — or appear to have posted — online. One notable figure among them is Ahed Tamimi, a 22-year-old who has been a prominent advocate against the occupation since she was a teenager. Israeli authorities raided Tamimi’s home in early November and arrested her on accusations that she had written a post on Instagram inciting violence against Israeli settlers. The young woman’s family denied that Tamimi had posted the message, explaining that the post came from someone impersonating her, amid an online harassment campaign targeting the activist. Since her arrest, she has not been charged with any crime. On Tuesday, Tamimi’s name appeared on an official list of Palestinian detainees slated for release.

Israeli authorities have been quick to retaliate against anything that might look like antisemitic speech online — unless it comes from Elon Musk. The automotive and space-tech tycoon somehow managed to get a personal tour of Kfar Aza kibbutz — the scene of one of the massacres that Hamas militants committed on Oct. 7 — from no less than Prime Minister Benjamin Netanyahu himself this week. Just days prior, Musk had been loudly promoting an antisemitic conspiracy theory about anti-white hatred among Jewish people on X, describing it as “the actual truth.” Is Netanyahu not bothered by the growing pile of evidence that Musk is comfortable saying incredibly discriminatory things about Jewish people? As with Altman, the rules just don’t apply when you’re Elon Musk.

And there was a business angle for Musk’s visit to Israel. He has a habit of waltzing into cataclysmic crises and offering up his services. It’s always billed as an effort to help people, but there’s usually a thinly veiled ulterior geopolitical motive. While in Israel, he struck a deal that will allow humanitarian agencies in Gaza to use Starlink, his satellite-based internet service operated by SpaceX. Internet connectivity and phone service have been decimated by Israel’s war on Gaza, in which airstrikes have destroyed infrastructure and the fuel blockade has left telecom companies all but unable to operate. So Starlink could really help here. But in this case, it will only go so far. Israel’s communications ministry is on the other end of the agreement and has made it clear that access to the network will be strictly limited to aid agencies, arguing that a more flexible arrangement could allow for Hamas to take advantage. Journalists, local healthcare workers and just about everyone else will have to wait.

WHAT WE’RE READING

  • A study by Wired and the Integrity Institute’s Jeff Allen found that when the messaging service Telegram “restricts” channels that feature right-wing extremism and other forms of radicalized hate, they don’t actually disappear — they just become harder to “discover” for those who don’t subscribe. Vittoria Elliott has the story for Wired.
  • In her weekly Substack newsletter, crypto critic and Berkman Klein Center fellow Molly White offered a thoughtful breakdown of Silicon Valley’s “effective altruism” and “effective accelerationism” camps, which she writes “only give a thin philosophical veneer to the industry’s same old impulses.”

The post For OpenAI’s CEO, the rules don’t apply appeared first on Coda Story.

]]>
Fleeing war? Need shelter? Personal data first, please https://www.codastory.com/newsletters/conflict-refugees-data-surveillance/ Thu, 16 Nov 2023 14:24:50 +0000 https://www.codastory.com/?p=48355 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Hikvision offers ethnic minority “alerts” in Chinese university dining halls, and surveillance hits an all-time high in the West Bank.

The post Fleeing war? Need shelter? Personal data first, please appeared first on Coda Story.

]]>
More people have been displaced by violence and natural disasters over the past decade than ever before in human history, and the numbers — that already exceed 100 million — keep climbing. Between ongoing conflict in the Democratic Republic of Congo, Pakistan’s mass expulsion of people of Afghan origin and Israel’s bombardment of Gaza, millions more people have been newly forced to leave their homes since October. 

When people become displaced en masse, organizations like the U.N., with its World Food Program and refugee agency, will often step in to help. But today, sometimes before they distribute food or medicine, they typically collect refugees’ data. Fingerprinting, iris scans and even earlobe measurements are now a common requirement for anyone seeking to meet their basic needs.

This week I caught up with Zara Rahman, a tech and social justice researcher who studies the drive across international humanitarian and intergovernmental organizations like the U.N. and the World Bank to digitize our identities.

“Of course, U.N. agencies are trying to figure out how much food and what resources we need,” Rahman told me. But “the amount of information that is being demanded and collected from people in comparison to what is actually needed in order to provide resources is just wildly different.” 

In “Machine Readable Me: The Hidden Ways Tech Shapes Our Identities,” her new book on the global push to digitize our lives, Rahman looks at the history of data collection by governments and international agencies and what happens when their motives change or data falls into the wrong hands. Nazi Germany is a top pre-digital case study here — she has a great passage about how members of the Dutch resistance bombed Amsterdam’s civil registry office during World War II to prevent Nazis from using the registry to identify and persecute Jews.

She then leaps forward to Afghanistan, where U.S. occupying forces deployed data collection systems that were later seized by the Taliban when they skated back into power in 2021. These databases gave Taliban leadership incredibly detailed information about the lives of people who worked for the U.S. government — to say nothing of women, whose lives and opportunities have been entirely rewritten by the return to Taliban rule. We may never know the extent of the damage incurred here.

Data collection and identity systems are also used, or could potentially be used, to persecute and victimize people whose nationality is contested, like many of those being expelled right now from Pakistan. Rahman emphasized that what happens to these people may depend on who the state perceives them to be and whether they are seen as people who “should return to Pakistan at some point.” 

Rohingya Muslims, she reminded me, were famously denied citizenship and the documentation to match by the Myanmar government for generations. Instead, in the eyes of the state, they were “Bengalis” — an erroneous suggestion that they had come from Bangladesh. In 2017, hundreds of thousands of Rohingya people fled the Burmese military’s ethnic cleansing operations in western Myanmar and landed in Bangladesh, where the government furnished them with IDs saying that they were from Myanmar, thereby barring them from putting down roots in Bangladesh. In effect, both countries leveraged their identity systems to render the Rohingya people stateless and wash their hands of this population. 

What recourse do people have in such circumstances? For the very rich, these rules don’t apply. People with deep pockets rarely find themselves in true refugee situations, and some even purchase their way to citizenship — in her book, Rahman cites a figure from Bloomberg, which reported that “investor-citizens spent $2 billion buying passports in 2014.” But most of the tens of millions of people affected by these systems are struggling to survive — the financial and political costs of litigating or challenging authorities are totally out of bounds. And with biometric data a part of the package, the option of slipping through the system or somehow establishing yourself informally is too. Your eyes are your eyes and can be used to identify you forever.

GLOBAL NEWS

Facial recognition tech is a key tool in China’s campaign against ethnic Uyghurs. This isn’t news, but the particular ways in which Chinese authorities deploy such tools to persecute Uyghur people, most of whom are Muslim, continue to horrify me. It came to light recently that Hikvision, the popular surveillance camera maker that offers facial recognition software, won a state contract in 2022 to develop a system that conducts “Assisted Analysis Of Ethnic Minority Students.” It’s worth noting that Hikvision in the past has boasted of its cameras’ abilities to spot “Uyghur” facial features, a brag that helped the technology get blacklisted in the U.S. But while you can’t buy it here, it’s pretty common across Asia, Africa and even in the U.K. The recently leaked tenders and contracts, published on IPVM, show that the company developed tools that alerted Chinese authorities about university students who were “suspected of fasting” during Ramadan, as well as monitored travel plans, observation of holidays and even things like what books ethnic minority students checked out at the library. Paging George Orwell.

Israel is also doubling down on facial recognition and other hardcore surveillance tech, after its world-renowned intelligence system failed to help prevent the deadly attacks of October 7. In the occupied West Bank, Palestinians report their daily movements are being watched and scrutinized like never before. That’s saying a lot in places like the city of Hebron, which has been dotted with military checkpoints, watchtowers and CCTV cameras — some of which are supplied by Hikvision — for years now. In a dispatch this week for Wired, Tom Bennett wrote about the digital profiling and facial recognition surveillance database known as Wolf Pack that allows the military officers to pull up complex profiles on all Palestinians in the territory, simply by scanning their faces. In a May 2023 report, Amnesty International asserted that whenever a Palestinian person goes through a checkpoint where the system is in use, “their face is scanned, without their knowledge or consent.”

Some of the world’s most powerful tech companies are either headquartered or present in Israel. So the country’s use of technology to surveil Palestinians and identify targets in Gaza is a burning issue right now, including for engineers and tech ethics specialists around the world. There’s an open letter going around, signed by some of the biggest names in the responsible artificial intelligence community, that condemns the violence and the use of “AI-driven technologies for warmaking,” the aim of which, they write, is to “make the loss of human life more efficient.” The letter covers a lot of ground, including surveillance systems I mentioned above and Project Nimbus, the $1.2 billion deal under which Amazon and Google provide cloud computing services to the Israeli government and military. Engineers from both companies have been advocating for their employers to cancel that contract since it first became public in 2021. 

The letter also notes the growing pile of evidence of anti-Palestinian bias on Meta’s platforms. Two recent stand-out examples are Instagram’s threat to suspend the account of acclaimed journalist Ahmed Shihab-Eldin over a video he posted that showed Israeli soldiers abusing Palestinian detainees, and the shadowbanning of digital rights researcher Mona Shtaya after she posted a link to an essay she wrote for the Middle East Institute on the very same issue. Coincidence? Looking at Meta’s track record, I very much doubt it.

WHAT WE’RE READING

  • I’ve written a few times about how police in the U.S. have misidentified suspects in criminal cases based on faulty intel from facial recognition software. Eyal Press has a piece on the issue for The New Yorker this week that asks if the technology is pushing aside older, more established methods of investigation or even leading police to ignore contradictory evidence.
  • Peter Thiel is taking a break from democracy — and he won’t be bankrolling Trump’s 2024 presidential campaign. Read all about it in Barton Gellman’s illuminating profile of the industry titan for The Atlantic.

The post Fleeing war? Need shelter? Personal data first, please appeared first on Coda Story.

]]>
Wartime in the ‘digital wild west’  https://www.codastory.com/newsletters/israel-gaza-content-moderation-twitter/ Thu, 09 Nov 2023 14:03:10 +0000 https://www.codastory.com/stayonthestory/will-a-new-regulation-on-ai-help-tame-the-machine/ Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Musk taunts Wikipedia, Sri Lanka flirts with a new censorship tool, and Greek politicians continue to grapple with their spyware problem.

The post Wartime in the ‘digital wild west’  appeared first on Coda Story.

]]>
As Israel continues its advance into Gaza, the need for oversight and accountability around what appears on social media feels especially urgent. Forget for a minute all the stuff online that’s either fake or misinformed. There are reams of real information about this war that constantly trigger the censorship systems of Big Tech companies. 

Consider the subject of terrorism. The biggest players all have rules against content that comes from terrorist groups or promotes their agendas, many of which align with national laws. This might sound uncomplicated, but the governing entity in Gaza, for instance, is Hamas, a designated terror organization in the eyes of Israel and, even more importantly, the U.S., home to the biggest tech companies on earth. Digital censorship experts have expressed well-founded fears that between Big Tech’s self-imposed rules and regional policies like the EU’s Digital Services Act, companies could be censoring critical information such as evidence of war crimes or making it impossible for people in the line of fire to access vital messages.

Although the stakes here couldn’t be higher, we also know that content moderation work is too often relegated to tiny teams within a company or outsourced to third parties.

Companies are typically coy about how this works behind the scenes, but in August the Digital Services Act went into effect, requiring the biggest of the Big Techs to periodically publish data about what kinds of content they’re taking down in the EU and how they’re going about it. And last week, the companies delivered. The report from X showed some pretty startling figures about how few people are on the front lines of content moderation inside the company. It’s been widely reported that these teams were gutted after Elon Musk took over a year ago but I still wasn’t prepared for the actual numbers. The chart below shows how many people X currently employs with “linguistic expertise” in languages spoken in the EU.

X has expertise on fewer than half of the bloc’s official languages, and for most of them, it employs literally one or two people per language. The languages with teams in the double digits are probably explained by a combination of regulation, litigation and political threats that have tipped the scales in places like Germany, Brazil and France. But for a company with this much influence on the world, the sheer lack of people is staggering.

Industry watchers have jumped all over this. “There is content moderation for the English-speaking world, which is already not perfect, and there is the Digital Wild West for the rest of us,” wrote Roman Adamczyk, a network analyst who previously worked with the Institute for Strategic Dialogue. “Will this change in light of the 2024 elections in Finland, Lithuania, Moldova, Romania and Slovakia?” asked Mathias Vermeulen, director of the privacy litigation group AWO. Great question. Here are a few more, in no particular order:

What are people who speak Hungarian or Greek — of which there are about 13 million each in the EU — supposed to make of this? What about all the places in the EU where the Russian language has a big presence, sometimes of the fake news variety? What happens if the sole moderator for Polish gets the flu? Is there any recourse if the two moderators for Hebrew, whose jobs I seriously don’t envy right now, get into an argument about what counts as “terrorist” content or “incitement to violence”? These moderators — “soldiers in disguise” on the digital battlefield, as one Ethiopian moderator recently put it to Coda — have serious influence over what stays up and what comes down.

After reading accounts from moderators working through Ethiopia’s civil war, I shudder to think of what these staffers at X are witnessing each day, especially those working in Arabic or Hebrew. The imperative to preserve evidence of war crimes must weigh heavily on them. But ultimately, it will be the corporate overlords — sometimes forced by the hands of governments — who decide what gets preserved and what will vanish.

GLOBAL NEWS

Elon Musk has once again been taking potshots at the world’s largest online encyclopedia. Two weeks back, he poked fun at the Wikimedia Foundation’s perennial donation drive and then jokingly considered paying the foundation $1 billion to change the platform’s name to — so sorry — “Dickipedia.” It is hard to know where to begin on this one, except to say that while Wikipedia functions on a fraction of the budget that X commands, it takes things like facts and bias a lot more seriously than Musk does and supports 326 active language communities worldwide. In the meantime, Wikipedia’s fate in the U.K. still hangs in the balance. Regulators are sorting out the implementation of the country’s new Online Safety Act, which will require websites to scan and somehow remove all content that could be harmful to kids before it appears online. There’s a lot wrong with this law, including the fact that it will inspire other countries to follow suit.

One recent copycat is Sri Lanka, where the parliament is now considering a bill by the same name. Although proponents say they’re trying to help protect kids online, Sri Lanka’s Online Safety Bill treads pretty far into the territory of policing online speech, with an even broader mandate than its British counterpart. One provision aims to “protect persons against damage caused by communication of false statements or threatening, alarming or distressing statements.” Another prohibits “coordinated inauthentic behavior” — an industry term that covers things like trolling operations and fake news campaigns. A committee appointed by Sri Lanka’s president gets to decide what’s fake. Sanjana Hattotuwa, research director at the New Zealand-based Disinformation Project, has pointed out the clear pitfalls for Sri Lanka, where digital disinfo campaigns have been a hallmark of national politics for more than a decade. In an editorial for Groundviews, Hattotuwa argued that the current draft will lead to “vindictive application, self-serving interpretation, and a license to silence,” and predicted that it will position political incumbents to tilt online discourse in their favor in the lead up to Sri Lanka’s presidential election next year.

Greek lawmakers pushed through a ban on spyware last year, after it was revealed that about 30 people, including journalists and an opposition party leader, were targeted with Predator, a mobile surveillance software made by the North Macedonian company Cytrox. But efforts to get to the bottom of the scandal that started it all — who bought the spyware, and who picked the targets? — have been stymied, thanks in part to the new conservative and far-right elements in parliament. The new government has overhauled the independent committee that was formed to investigate the spyware scandal, in what opposition lawmakers called a “coup d’etat.” And now two of the committee’s original members are being investigated over allegations that they leaked classified information about the probe. When it comes to regulating — in this case, banning — spyware, EU countries probably have the best odds at actually making the rules stick. But what’s happened in Greece over the last 18 months shows that it’s still an uphill battle.

WHAT WE’RE READING

  • Wired’s Vittoria Elliott has a new report on the rise of third-party companies that provide what’s known in the tech industry as “trust and safety” services. A key takeaway of the piece is that when companies outsource this kind of work, it means they’re “outsourcing responsibilities to teams with no power to change the way platforms actually work.” That’s one more thing to worry about.
  • Beloved sci-fi writer and open internet warrior Cory Doctorow brought us a friendly breakdown this week of some really important legal arguments being made around antitrust law and just how harmful Amazon is to consumers and sellers alike. In a word, says Doctorow, it is “enshittified.” Read and learn.

The post Wartime in the ‘digital wild west’  appeared first on Coda Story.

]]>
Will a new regulation on AI help tame the machine? https://www.codastory.com/newsletters/artificial-intelligence-bias-regulation/ Fri, 03 Nov 2023 13:05:02 +0000 https://www.codastory.com/?p=47978 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Gazans face an internet blackout, and mobile spyware strikes again in India.

The post Will a new regulation on AI help tame the machine? appeared first on Coda Story.

]]>
About a year ago, police outside Atlanta, Georgia, pulled over a 29-year-old Black man named Randal Reid and arrested him on suspicion that he had committed a robbery in Louisiana — a state that Reid had never set foot in. After his lawyers secured Reid’s release, they found telltale signs that he’d been arrested due to a faulty match rendered by a facial recognition tool. 

As revealed by The New York Times, the Louisiana sheriff’s office that had ordered Reid’s arrest had a contract with Clearview AI, the New York-based facial recognition software company that allows clients to match images from surveillance video with the names and faces of people they wish to identify, drawing on a database containing billions of photos scraped from the internet. Reid spent six days in jail before authorities acknowledged their mistake.

Reid is just one among a growing list of people in the U.S. who have been through similar ordeals after police misidentified them using artificial intelligence. In nearly all reported cases, the people who were targeted are Black, and research has shown over and over again that these kinds of software tend to be less accurate when they try to identify the faces of people with darker skin tones. Yet police in the U.S. and around the world keep using these systems — because they can.

But there’s a glimmer of hope that the use of technology by law enforcement in the U.S. could start to be made more accountable. On Monday, the White House dropped an executive order on “safe, secure and trustworthy” AI, marking the first formal effort to regulate the technology at the federal level in the U.S.

Among many other things, the order requires tech companies to put their products through specific safety and security tests and share the results with the government before releasing their products into the wild. The testing process here, known as “red teaming,” is one where experts stress test a technology and see if it can be abused or misused in ways that could harm people. In theory at least, this kind of regime could put a stop to the deployment of tools like Clearview AI’s software, which misidentified Randal Reid.

If done well, this could be a game changer. But in what seems like typical U.S. fashion, the order feels more like a roadmap for tech companies than a regulatory regime with hard restrictions. I exchanged emails about it with Albert Fox Cahn, who runs the Surveillance Tech Oversight Project. From his standpoint, red teaming is no way to strike at the roots of the problems that AI can pose for the public interest. “There is a growing cadre of companies that are selling auditing services to the highest bidder, rubber stamping nearly whatever the client puts forward,” he wrote. “All too often this turns into regulatory theater, creating the impression of AI safeguards while leaving abusive practices in place.” Fox Cahn identified Clearview AI as a textbook example of the kinds of practices he’s concerned about.

Why not ban some kinds of AI altogether? This is what the forthcoming Artificial Intelligence Act will do in the European Union, and it could be a really good model to copy. I also chatted about it with Sarah Myers West, managing director of the AI Now Institute. She brought up the example of biometric surveillance in public spaces, which soon will be flat-out illegal in the EU. “We should just be able to say, ‘We don’t want that kind of AI to be used, period, it’s too harmful for the public,’” said West. But for now, it seems like this is just too much for the U.S. to say.

GLOBAL NEWS

The internet went dark in Gaza this past weekend, as Israeli forces began their ground invasion. More than 9,000 people have already been killed in nearly a month of aerial bombardment. With the power out and infrastructure reduced to rubble, the internet in Gaza has been faltering for weeks. But a full-on internet shutdown meant that emergency response crews, for instance, were literally just racing towards explosions wherever they could see and hear them, assuming that people would soon be in need of help. U.S. senior officials speaking anonymously to The New York Times and The Washington Post said they had urged Israeli authorities to turn the networks back on. By Sunday, networks were online once again.

Elon Musk briefly jumped into the fray, offering an internet hookup to humanitarian organizations in Gaza through his Starlink satellite service. But as veteran network analyst Doug Madory pointed out, even doing this would require Israel’s permission. I don’t think Musk is the best solution to this kind of problem — or any problem — but satellite networks could prove critical in situations like these where communication lines are cut off and people can’t get help that they desperately need. Madory had a suggestion on that too. Ideally, he posted on X, international rules could mandate that “if a country cuts internet service, they lose their right to block new entrants to the market.” Good idea.

Opposition politicians and a handful of journalists in India have become prime surveillance targets, says Apple. Nearly 20 people were notified by the company earlier this week that their iPhones were targeted in attacks that looked like they came from state-sponsored actors. Was Prime Minister Narendra Modi’s Bharatiya Janata Party behind it? It’s too soon to say, but there’s evidence that the ruling government has all the tools it needs to do exactly that. In 2021, the numbers of more than 300 Indian journalists, politicians, activists and researchers turned up on a leaked list of phones targeted with Pegasus, the notoriously invasive military-grade spyware made by NSO Group. At Coda, we reported on the fallout from the extensive surveillance for one group of activists on our podcast with Audible.

WHAT WE’RE READING

  • My friend Ethan Zuckerman wrote for Prospect magazine this week about the spike in disinformation, new measures that block researchers from accessing social media data, and lawsuits targeting this type of research. These factors, he says, are taking us to a place where what happens online is, in a word, “unknowable.”
  • Peter Guest’s excellent piece for Wired about the U.K.’s AI summit drolly described it as “set to be simultaneously doom-laden and underwhelming.” It’s a fun read and extra fun for me, since Pete will be joining our editorial team in a few weeks. Keep your eyes peeled for his stuff, soon to be coming from Coda.

The post Will a new regulation on AI help tame the machine? appeared first on Coda Story.

]]>
How the new UK tech law hurts Wikipedia https://www.codastory.com/newsletters/better-internet-wikipedia/ Thu, 26 Oct 2023 18:01:51 +0000 https://www.codastory.com/?p=47486 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us. Also in this edition: Meta keeps mistreating content from Palestine, Venezuelans cast primary ballots (despite censorship) and Apple has a problem with Jon Stewart.

The post How the new UK tech law hurts Wikipedia appeared first on Coda Story.

]]>
It has been an incredibly difficult three weeks in the world, and the internet shows it. In the last couple of newsletters, I’ve noted just how hard it is to find reliable information on the social web right now, where everything seems to revolve around attention, revenue and shock value, and verified facts are few and far between. So this week, I’m turning my attention to a totally different part of the internet: Wikipedia. 

It’s been on my mind lately because of the proposed new online safety law in the U.K. that will set strict age requirements for young people online and require websites to scan and somehow remove all content that could be harmful to kids before it appears online. In a recent blogpost for the Wikimedia Foundation — the non-profit that supports Wikipedia — Vice President for Global Advocacy Rebecca MacKinnon wrote that by requiring sites to scan literally everything before it gets posted, the bill could upend the virtual encyclopedia’s bottom-up approach to content creation. As she put it, the law could destroy Wikipedia’s system “for maintaining encyclopedic integrity.”

You may be wondering precisely what “encyclopedic integrity” means at Wikipedia, where the article on the Marvel Comics character Spider-Man cites almost twice as many sources as the article for the Republic of Chad, a country of an estimated 18.5 million people. I get it. Wikipedia, by its own admission, has had problems with an overrepresentation of the interests of nerdy white male American 20-somethings who have too much time on their hands. But these people also really care about what they post online, and they have created an effective cooperative system for collecting, verifying and building knowledge. The system is totally dependent on the good will of thousands of contributors, and it is wholly decentralized — there are Wikipedia communities across the globe who share some basic principles, but decide together how they’ll handle contributions that could violate the law, offend readers or anything in between. In sharp contrast to corporate social media spaces, where attention is the driver of all things, this is a totally different way to “scale up” — more like scaling out — and it has led to a dramatically different kind of information resource.

I recently spoke with two Wikipedia volunteers in Wales, who are seriously worried about the effects that the U.K. bill might have on Wikipedia’s Welsh-language site, which is the only Wikipedia community that exists almost entirely within the jurisdiction of the U.K. Robin Owain and Jason Evans explained to me just how essential Wikipedia has become for Welsh speakers — with 90 million views in the last 12 months, Welsh Wikipedia is the largest and most popular Welsh-language website on the internet. Young people are a big part of this, and the secondary school system in Wales works actively with the community to engage high school students in building up material on the site. 

For Owain and Evans, this is fundamental to their purpose. “We want young people to feel as though the internet’s something that you can interact with,” Evans said. But the U.K.’s new online safety law could take that away. The two surmise that once the bill is enacted, it will be nearly impossible to allow people under 18 to contribute to the site. It could, as Evans put it, “really reinforce the idea that the internet is just a place to get information, that it’s not something you can be a part of.” 

They also worry that the bill’s requirements regarding content could leave contributors fearful of violating the law. “If there’s anything contentious, anything that has adult themes or strong language, no matter how true something might be, or how factual, there will be a concern that if it’s left on Wiki, there’s a risk that young people will see it and we’ll fall foul of the bill,” said Evans. “That in itself does create an atmosphere where you are essentially censoring Wikipedia, and that goes against everything Wikipedia is about.”

It also stings, the two noted, since the U.K. bill was written with the biggest of Big Tech companies in mind. For some reason, its authors couldn’t be persuaded to make a carve-out for projects like Wikipedia. But Owain has some hope that Welsh people and the Welsh government — a Labour party-dominated legislature that does ultimately answer to the British parliament — just might have something to say about it. 

“I should think the whole of Wales would stand up as one and say, ‘Oh! We will access Wikipedia!’ and the Welsh government will support it,” Owain said, raising a fist in the air. I hope he’s right.

Pro-Palestinian messages are getting shadowbanned and horribly mistranslated on social media. Over the past two weeks, multiple journalists, artists, Instagram influencers and even New York Times reporter Azmat Khan reported that their posts containing words like “Palestine” and “Gaza” simply weren’t reaching followers. To make matters worse, a handful of Instagram users found that the platform was spontaneously inserting the word “terrorist” into its machine-translations of the word “Palestinian” from Arabic to English. This reminds me of 2021, when the Al-Aqsa Mosque in Jerusalem was mistakenly labeled as a “dangerous organization” by the same platform. The takeaway here is that Meta, Facebook and Instagram’s parent company, has told its computers to use things like the U.S. government’s list of designated terror groups in order to identify content that could spark violence. This might sound reasonable on the surface, but when you throw in a little artificial intelligence and some plain old human bias, it can get ugly.

Meta has a long history of mistreating speech about Palestine, and while the company is always quick to blame the tech (it’s a “glitch,” the execs say), the evidence suggests that it is not that simple. Between the U.S. government’s list of designated terror groups, Meta’s own list of “dangerous individuals and organizations,” the EU’s Digital Services Act, soft pressure from the U.S. and Israel alike, and a set of community standards that seems to get more complicated by the day, it seems like the decks are stacked against Palestinians who are just trying to say what they feel right now. I will keep my eyes peeled for further “glitches” in the weeks ahead.

Venezuela saw a smattering of web outages over the weekend, during  the political opposition’s presidential primary election, the first to be held since 2012. This was no ordinary vote — public trust in the country’s electoral system is extraordinarily low, due to a history of election fraud allegations and the ruling United Socialist Party’s routine efforts to block bids by its opponents. Opposition organizers created an independent entity, the National Primary Commission, to oversee the election and set up polling places in churches and at people’s homes, rather than using publicly managed buildings like schools and community centers. Over the weekend, the network monitoring group NetBlocks documented huge drops in connectivity in Caracas, and Venezuela Sin Filtro, a censorship monitoring group, reported that websites which listed polling places were inaccessible on most telecom networks. The group also presented evidence that the systems used to count the votes — an estimated 1.5 million people cast their ballots, both inside and outside the country — were hit with cyberattacks. Out of a crowded field, María Corina Machado, a conservative former lawmaker, had won more than 90% of the votes counted by mid-week.

Apple has a problem with Jon Stewart. Last week, the cherished TV comic abruptly canceled the third season of “The Problem with Jon Stewart,” his show on streaming service Apple TV, after the company reportedly pushed back on the script for an episode in which he planned to discuss AI and China. We don’t hear much about Apple in stories about content control and Big Tech, but between the App Store, Apple TV and Apple Podcasts, the company has a huge amount of discretion over what kinds of media and apps its users can most easily access. And when it comes to China — home to the Foxconn factory where half of the world’s iPhones are manufactured — the company has often been quick to bow to censorship demands. There’s been no further information about what exactly Stewart had planned to talk about, but it’s easy to imagine that it might have had Apple’s overlords worried about offending their Chinese business partners.

WHAT WE’RE READING

  • My friend Oiwan Lam, an intrepid Hong Konger who has kept her ear to the ground and her finger on the pulse of the Chinese internet through all the political ups and downs of the past decade, translated a fascinating exclusive interview by a YouTuber known as Teacher Li with a censorship worker from mainland China. Give it a read.
  • In a new essay for Time magazine, Heidy Khlaaf, who specializes in AI safety in high-stakes situations, says we should regulate AI in the same way we do nuclear weapons.
  • The fraud trial of Sam Bankman-Fried, founder of the cryptocurrency exchange FTX, is now well underway in New York. This piece in The Ringer puts you right in the courtroom.

The post How the new UK tech law hurts Wikipedia appeared first on Coda Story.

]]>
Losing lifelines in Gaza https://www.codastory.com/newsletters/israel-gaza-electricity/ Thu, 19 Oct 2023 13:48:16 +0000 https://www.codastory.com/?p=47244 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Sudan’s Omar al-Bashir might be on TikTok, and dating apps are becoming dangerous in Uganda.

The post Losing lifelines in Gaza appeared first on Coda Story.

]]>
NO BATTERY LEFT

It has been more than a week since Israel cut off electricity, water, fuel and food shipments for 2.3 million people in Gaza, as part of its response to the unprecedented attacks launched by Hamas on October 7. Internet shutdowns have become an all-too-common tool of control in conflict situations around the world. But an enforced power cut takes it to another level entirely. It makes network shutdowns look like child’s play.

UN human rights chief Volker Turk, Human Rights Watch and the International Committee of the Red Cross have all said these cuts amount to a violation of international humanitarian law — in other words, a war crime.

Yet the power is still out. The blackout has caused a cascade of problems for all kinds of systems, from water pumps and sanitation to telecommunications networks, in an already catastrophic situation. Under bombardment by Israel, more than 3,000 Gazans have been killed, thousands have been injured and, according to the United Nations, about a million people displaced. 

It is getting more and more difficult for people in Gaza to stay in contact with each other, and with people outside the territory. I spoke with Asmaa Alkaisi, a recent graduate of the University of Washington’s international studies school, who came to the U.S. from Gaza, where she has lived most of her life. 

As recently as two weeks ago, Alkaisi had a daily habit of checking in with her family, most importantly her mother, on video calls. But over the past 10 days, she has been unable to reach them. She has resorted to checking lists of the dead and missing, to see if their names appear.

“If you don’t see their names in the lists of missing or killed ones, then you know that they’re OK,” she said. It has become almost impossible to reach people locally. “I have lost contact with my best friend for 11 days now,” she said. “I honestly don’t know if she’s still alive.”

She explained that reports on television have new importance. “I found out from the news and the videos that my house was completely destroyed and leveled to the ground,” she told me. “I didn’t know that from my family, I found out from the news.”

At 39 years old, Alkaisi has lived through many periods of intense conflict in Gaza, but this “tops everything we have ever been through,” she said. She told me about a classmate of hers in the U.S. who once asked if Gazans “get used to” living with the looming threat of military aggression from Israel. The question shocked her.

“Every time this happens, it brings back all the trauma, it is as if it’s the first time it is happening,” she said. “We’re all shocked, we’re all in fear, we’re all petrified of the situation. You could be the next target. That’s more scary than anything in the world.”

And just like everyone else in the territory, journalists are facing terrifying, life-threatening circumstances. The BBC’s Rushdi Abu Alouf wrote on Tuesday about his own struggle to report on the devastation while trying to keep his family safe. With so much of what is happening on the ground being called into question by actors on all sides, these accounts really matter, and they will be harder and harder to capture and preserve as the situation worsens. 

I looked at a different part of this issue last week, focusing on the wreckless spread of disinformation by people who are not on the ground. But I shied away from the most consequential reports, like the gut-wrenching — but unsupported — allegation that babies in Israel were decapitated by Hamas, thinking it would be better not to repeat this bloody narrative, lest it be perpetuated.

My former colleague Reem Al-Masri, a media policy and disinformation researcher from Jordan, called me out on this. “Yes, social media is fertile ground for disinformation, but inaccurate information is only as harmful as its reach,” she wrote in an email. “We cannot treat misinformation that stays within the galaxy of social media the same way once it has made its way to officials,” she wrote, referring to U.S. President Joe Biden. Both Israeli and U.S. officials repeated this story, only to acknowledge later that they had no evidence to support it. This kind of disinformation is uniquely dangerous, Reem cautioned, because it affects how states and other actors make wartime decisions. She’s right. Thank you, Reem.

Hamas is abusing Facebook’s livestream feature. The families of several of the nearly 200 Israelis being held hostage by Hamas have reported that their captors are breaking into their loved ones’ Facebook accounts and in some cases livestreaming attacks or messages from wherever victims are being held. The account breach at the root of this is one thing, which unfortunately isn’t a new tactic — I’ve seen police do this in situations where colleagues have been arrested or detained. And this particular use of livestream calls to mind mass shootings that have been broadcast in the same way, most famously the massacre of 51 people at two mosques in Christchurch, New Zealand, in 2017. Facebook’s parent company Meta says it’s got a war room of people fluent in Arabic and Hebrew who are reviewing posts and trying to make game-time decisions on what should stay up and what should come down — this is good, though these efforts have pitfalls of their own, as Meta’s auditors noted a few years back. But there’s no way to “review” a livestream. At this point, if I could make them get rid of the feature, I would.

Is Sudan’s Omar al-Bashir back in action? Or is it just the AI talking? While everyone in the West seems to be watching Israel and Palestine, the conflict in Sudan continues without relent. Last week, the BBC dug into The Voice of Sudan, a viral TikTok account that since August has been posting audio missives that it claims are leaked recordings from former President Omar al-Bashir, who was ousted following mass protests in 2019. This is a real eyebrow-raiser, since al-Bashir hasn’t been seen in public for more than a year. But through The Voice of Sudan account, he is apparently speaking again, sounding in good health and criticizing the Sudanese army.

Forensics experts who’ve studied the recordings say that they display hallmarks of deep fakes and that they probably were made using an off-the-shelf artificial intelligence “voice cloning” tool that could capture audio from the former president’s actual speeches and then use that material to generate convincing imitations of him. The reporters talked with Mohamed Suliman, a Sudanese AI researcher at Northeastern University whose work I’ve highlighted in the past. “What’s alarming is that these recordings could also create an environment where many disbelieve even real recordings,” he told them. This is a really good point, and it’s instructive for this moment, far beyond Sudan. With so many convincing fakes making the rounds, it seems easier every day to question what’s real.

Dating apps are becoming dangerous in Uganda. The country’s updated law that criminalizes homosexuality has been on the books for a few months now, and public data shows that 17 people were arrested under the law in June and July. Two of them were “caught” expressing an LGBTQ identity — which is now literally a crime in Uganda — on dating apps. The Kampala-based Human Rights Awareness and Promotion Forum found that in both instances, the gay men using dating apps were effectively entrapped by another user who then reported them to police.

WHAT WE’RE READING

  • The Guardian published an explosive investigation of Amazon’s warehouses in Saudi Arabia, where dozens of Nepali workers told reporters they were tricked by recruiters, forced to work under harsh conditions, laid off and then made to pay sky-high fees in order to return home.
  • Rest of World talked with Meredith Whittaker, president of Signal, about how governments from Brazil to India and now the U.K. have put the future of the privacy-first messaging app on the line.
  • Writing for The Atlantic, acclaimed AI reporter Karen Hao dug deep into the critical battle playing out between the U.S. and China over tech export controls and who owns the future of AI. Don’t miss this one.

The post Losing lifelines in Gaza appeared first on Coda Story.

]]>
How Big Tech is fueling — and monetizing — false narratives about Israel and Palestine https://www.codastory.com/newsletters/how-big-tech-is-fueling-and-monetizing-false-narratives-about-israel-and-palestine/ Fri, 13 Oct 2023 13:16:51 +0000 https://www.codastory.com/?p=47123 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: African tech workers take on Big Tech, Manipur bans violent images online, and the U.N. is “tech-washing” Saudi Arabia.

The post How Big Tech is fueling — and monetizing — false narratives about Israel and Palestine appeared first on Coda Story.

]]>
THE FOG OF DIGITAL DISINFORMATION

I have few words for the atrocities carried out by Hamas in Israel since October 7, and the horrors that are now unfolding in Gaza.

I have a few more for a certain class of social media users at this moment. The violence in Israel and Palestine has triggered what feels like a never-ending stream of pseudo-reporting on the conflict: allegations, rumors and straight up falsehoods about what is happening are emerging at breakneck speed. I’m not talking about posts from people who are actually on the ground and may be saying or reporting things that are not verified. That’s the real fog of war. Instead, I’m talking about posts from people who jump into the fray not because they have something urgent to report or say, but just because they can.

Social media has given many of us the illusion of total access to a conflict situation, a play-by-play in real time. In the past, this was enlightening — or at least it felt that way. During the Gaza War in 2014, firsthand civilian accounts were something you could readily find on what was then called Twitter, if you knew where to look. I remember reading one journalist’s tweets about her desperate attempt to flee Gaza at the Rafah border crossing, amid heavy shelling by Israeli forces — her story stuck with me for years, returning to my mind whenever Gaza came up. These kinds of narratives may still be out there, but they are almost impossible to find amidst the clutter. And this time around, those stories from Gaza could disappear from the web altogether, now that Israel has cut off electricity in the territory, and internet access there is in free fall.

This illusion of being close to a conflict, of being able to understand its contours from far away is no longer a product of carefully reported news and firsthand accounts on social media. Sure, there was garbage out there in 2014, but nearly a decade on, it feels as if there are just as many posts about war crimes that never happened as there are about actual atrocities that did. Our current internet, not to mention the state of artificial intelligence, makes it too easy to spread misinformation and lies. 

On October 9, tens of thousands of people shared reports that Israeli warplanes had bombed a historic church in Gaza, complete with photos that could convince anyone who hasn’t actually been to that site. The church itself posted on Facebook to discredit the reports and assure people that it remains untouched. Conflict footage from Syria, Afghanistan, and as far away as Guatemala has been “recycled” and presented as contemporary proof of brutalities committed by one side or the other. And of course there are the “videos” of airstrikes that turned out to be screengrabs from the video game “Arma 3.” Earnest fact-checking outfits and individual debunkers have rushed in to correct and inform, but it’s not clear how much difference this makes. People look to have their biases confirmed, and then scurry on through the digital chaos.

Some are even posting about the war for money. Speaking with Brooke Gladstone of “On The Media” on October 12, tech journalist Avi Asher-Shapiro pointed out that at the same time that X has dismissed most of its staff who handled violent and false content on the platform, it has created new incentives for this kind of behavior by enabling “creators” to profit from the material they post. So regardless if a post is true or not, the more likes, clicks and shares it gets, the more money its creator rakes in. TikTok offers incentives like this too.

While X appears to be the unofficial epicenter of this maelstrom, the disinformation deluge is happening on Meta’s platforms and TikTok too. All three companies are now on the hook for it in the European Union. EU Commissioner Thierry Breton issued a series of public letters to their CEOs, pointing out that under the bloc’s  Digital Services Act, they have to answer to regulatory authorities when they fail to stop the spread of content that could lead to actual harm.

The sheer volume of disinformation is hard to ignore. And it is an unconscionable distraction from the grave realities and horror of the war in Gaza.

In pursuit of mass scale, the world’s biggest social media companies designed their platforms to host limitless amounts of content. This is nearly impossible for them to oversee or manage, as the events in Israel and Palestine demonstrate. Yet from Myanmar and Sudan to Ukraine and the U.S., it has been proven again and again that violent material on social media can trigger acts of violence in real life, and that people are worse off when the algorithms get the run of the place. The companies have never fully gotten ahead of this issue. Instead, they have cobbled together a combination of technology and people to do the work of identifying the worst posts and scrubbing them from the web. 

The people — content moderators — typically review hundreds of posts each day, from videos of racist diatribes to beheadings and sexual abuse. They see the worst of the worst. If they didn’t, the platforms would be replete with this kind of material, and no one would want to use them. That is not a viable business model.

Despite the core need for robust content moderation, the Big Techs outsource most of it to third-party companies operating in countries where labor is cheap, like India or the Philippines. Or Kenya, where workers report being paid between $1 and $4 per hour and having limited access to counseling — a serious problem in a job like this.

This week, Coda Story reporter Erica Hellerstein brought us a deep dive on the lives of content moderation workers in Nairobi who over the past several months have come together to push back on what they say are exploitative labor practices. More than 180 content moderators are suing Meta for $1.6 billion over poor working conditions, low pay and what they allege was unfair dismissal after Meta switched contracting companies. Workers have also voted to form a new trade union that they hope will force big companies like Meta, and outsourcing firms like Sama, to change their ways. Erica writes:

“While it happens at a desk, mostly on a screen, the demands and conditions of this work are brutal. Current and former moderators I met in Nairobi in July told me this work has left them with post-traumatic stress disorder, depression, insomnia and thoughts of suicide.

These workers are reaching a breaking point. And now, Kenya has become ground zero in a battle over the future of content moderation in Africa and beyond. On one side are some of the most powerful and profitable tech companies on earth. On the other are young African content moderators who are stepping out from behind their screens and demanding that Big Tech companies reckon with the human toll of their enterprise.”

Odanga Madung, a Kenya-based journalist and a fellow at the Mozilla Foundation, believes the flurry of litigation and organizing represents a turning point in the country’s tech labor trajectory. In his words: “This is the tech industry’s sweatshop moment.” Don’t miss this terrific, if sobering read.

Images of violence are also at issue in Manipur, India, where a new government order has effectively banned people from posting videos and photos depicting acts of violence. This is serious because Manipur has been immersed in waves of public unrest and outbursts of ethnic violence since May. After photos of the slain bodies of two students who had gone missing in July surfaced and went viral on social media last month, authorities shut down the internet in an effort to stem unrest. In the words of the state government, the new order is intended as a “positive step towards bringing normalcy in the State.” But not everyone is buying this. On X yesterday, legal scholar Apar Gupta called the order an attempt to “contour” media narratives that would also “silence the voices of the residents of the state even beyond the internet shutdown.”

The U.N. is helping Saudi Arabia to “tech-wash” itself. This week, officials announced that the kingdom will host the world’s biggest global internet policy conference, the Internet Governance Forum (IGF), in 2024. This U.N.-sponsored gathering of governments, corporations and tech-focused NGOs might sound dull — I’ve been to a handful of them and can confirm that some of it is indeed a yawn. But some of it really matters. The IGF is a place where influential policymakers hash out ideas for how the global internet ought to work and how it can be a positive force in an open society — or how it can do the opposite. After China and Iran, I can think of few places that would be worse to do this than Saudi Arabia, a country that uses technology to exercise authoritarianism in more ways than we probably know.

The post How Big Tech is fueling — and monetizing — false narratives about Israel and Palestine appeared first on Coda Story.

]]>
How AI is supercharging political disinformation ops https://www.codastory.com/newsletters/how-ai-is-supercharging-political-disinformation-ops/ Thu, 05 Oct 2023 13:17:18 +0000 https://www.codastory.com/?p=46912 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Russia hands a hefty prison sentence to a YouTuber and critics pan the new Elon Musk biography

The post How AI is supercharging political disinformation ops appeared first on Coda Story.

]]>
Were Slovakia’s elections rigged? Or was that just the artificial intelligence talking? Two days before Slovakians went to the polls last week, an explosive post made the rounds on Facebook. It was an audio recording of Progressive Slovakia party leader Michal Simecka telling a well-known journalist about his plan to buy votes from the country’s marginalized Roma minority. Or at least, that is what it sounded like. There was sufficient reason to believe that Simecka might have been desperate enough to do whatever it took to win the election — his party had been polling neck and neck against that of former Prime Minister Robert Fico, who resigned from the job back in 2018 amid anti-corruption protests following the murders of journalist Jan Kuciak and his fiancee Martina Kusnirova.

Simecka and the journalist who featured in the audio clip both quickly called it a fake, and fact-checking groups backed up their claims, noting that the digital file showed signs of having been manipulated using AI. But they were in a tough spot — the recording emerged during the 48-hour pre-polling period in which the media and politicians are restricted by law from speaking about elections at all. In the end, Progressive Slovakia lost to Fico’s Smer-SD party, and the political winds have quickly shifted. Fico ran on a populist platform, pledging that his government would “not give a single bullet” to Ukraine. Already heeding Fico’s word, the sitting president opposed a new military aid package for Ukraine just yesterday. And now Fico is expected to forge an alliance with Hungary’s Viktor Orban, the only EU head of state who has sided with Russia since the war began.

The possibility that a piece of evidence was fabricated using AI throws a new digital wrench into the already chaotic and oversaturated media landscape that all voters face in any election cycle. Slovakia isn’t the first country to run into this problem, and it definitely won’t be the last. Similar circumstances are expected in the run-up to Poland’s parliamentary elections later this month, where the war in Ukraine will very much be on the ballot, and where a victory for the right-wing Law and Justice party could add to Orban’s growing camp.

While the debunked audio clip in Slovakia was dutifully garnished with a fact-check label indicating that it may have been fabricated, it’s still making the rounds on Facebook. 

In fact, Meta (owner of Facebook, Instagram and Threads) and Google (owner of YouTube) have both indicated in recent months their plans to roll back some of the disinformation-busting efforts that they trotted out following the 2016 election in the U.S. But it is X, formerly known as Twitter, that is leading in the race to the bottom — every week, we see more signs that it has little interest in enforcing its rules on disinformation. 

Even the EU itself has brought this up: Last week, European Commission Vice President Vera Jourova called X out on the issue. “Russian propaganda and disinformation is still very present on online platforms. This is not business as usual; the Kremlin fights with bombs in Ukraine, but with words everywhere else, including in the EU,” Jourova said.

Although I was never all that convinced by their fact-checking efforts, it doesn’t help that the tech giants seem to have thrown up their hands on the issue. It leaves me almost nostalgic for a time when all we had to deal with was straight-up false or racist messages flooding the zone. Turns out, things could and did get worse. 2024, here we come.

GLOBAL NEWS

A Russian blogger was sentenced to eight and a half years in prison after being convicted of reporting “fake” news about Russian military actions in Ukraine. This type of journalism became a crime in Russia shortly after Russian forces launched the full-scale invasion of Ukraine in February 2022. Aleksandr Nozdrinov was arrested not long after the war began, and was finally dealt a sentence this week. Nozdrinov maintained a YouTube channel where he regularly posted video evidence of police corruption and malfeasance for an audience of more than 34,000 subscribers. According to the Committee to Protect Journalists, Nozdrinov denies having posted the material cited by prosecutors. He believes that the case against him was fabricated by authorities intent on targeting him in retaliation for his anti-corruption activities on YouTube.

Monday marked the fifth anniversary of the murder of Washington Post columnist Jamal Khashoggi, a Saudi exile and frequent critic of the Saudi Arabian regime. There is little doubt that Khashoggi’s gruesome killing inside the Saudi consulate in Istanbul came at the behest of Crown Prince Mohammed bin Salman. It came out later on that Khashoggi and some of his closest family members and colleagues were targeted with Pegasus, the notoriously invasive mobile phone spyware built by the Israeli firm NSO Group and used to spy on journalists in more than 50 countries, from Mexico to Morocco to India. The digital dimensions of Saudi Arabia’s tactics of repression don’t stop here, and they certainly are not news. But they do bear repeating.

Researchers in Australia think anti-Indigenous narratives on social media could swing an upcoming referendum. Tomorrow, Australians will vote on whether or not the country should establish a body that would advise the government on policy decisions affecting Aboriginal and Torres Strait Islander communities. A year ago, public opinion polls indicated that most Aussies — including Prime Minister Anthony Albanese — were in favor of the measure. But that has changed in recent months, and social science researchers say viral, racialized anti-Indigenous messaging campaigns on X and TikTok might have something to do with it. The Conversation is running a series on the issue — they’re worth a read.

WHAT I’M NOT READING: THE NEW MUSK BIOGRAPHY

Instead of reading Walter Isaacson’s new biography of Elon Musk, I have been lapping up the reviews and emoji-hearting other people’s dedication to pointing out everything that somehow failed to make the cut in this 670-page “insight-free doorstop of a book” (Gary Shteyngart’s words, not mine).

In the tome’s final pages, Isaacson writes: “Sometimes great innovators are risk-seeking man-children who resist potty training.” Um, what? As Jill Lepore wrote in The New Yorker: “This is a disconcerting thing to read on page 615 of a biography of a fifty-two-year-old man about whom a case could be made that he wields more power than any other person on the planet who isn’t in charge of a nuclear arsenal.” Since Isaacson didn’t, Lepore took it upon herself to school readers on some of the harsh political realities of apartheid-era South Africa where Musk grew up, noting that his maternal grandfather apparently moved the family from Canada to South Africa because of apartheid. She touches on grandpa’s openly antisemitic views, which Isaacson somehow writes off as “quirky.”

The book also has some pretty serious whoopsies when it comes to details about Musk’s financial moves. In Financial Times columnist Bryce Elder’s acid assessment: “When it comes to money, Isaacson is more a transcriber than a biographer.” Eesh.

Writing for The Atlantic, Sarah Frier had what feels to me like the truest line: “We don’t need to understand how he thinks and feels as much as we need to understand how he managed to amass so much power, and the broad societal impact of his choices — in short, how thoroughly this mercurial leader of six companies has become an architect of our future.” 

The post How AI is supercharging political disinformation ops appeared first on Coda Story.

]]>
Why are AI software makers lobbying for kids’ online safety laws? https://www.codastory.com/newsletters/why-are-ai-software-makers-lobbying-for-kids-online-safety-laws/ Thu, 28 Sep 2023 14:44:27 +0000 https://www.codastory.com/?p=46737 THINK OF THE CHILDREN Last week, the U.K. passed the Online Safety Bill, a law that’s meant to help snuff out child sexual exploitation and abuse on the internet. The law will require websites and services to scan and somehow remove all content that could be harmful to kids before it appears online.  This could

The post Why are AI software makers lobbying for kids’ online safety laws? appeared first on Coda Story.

]]>
THINK OF THE CHILDREN

Last week, the U.K. passed the Online Safety Bill, a law that’s meant to help snuff out child sexual exploitation and abuse on the internet. The law will require websites and services to scan and somehow remove all content that could be harmful to kids before it appears online. 

This could fundamentally change the rules of the game not only for big social media sites but also for any platform that offers messaging services. A provision within the law requires companies to develop technology that enables them to scan encrypted messages, thus effectively banning end-to-end encryption. There is powerful backing for similar laws to be passed in both the U.S. and the European Union.

Scouring the web in an effort to protect children from the worst kinds of abuse sounds like a noble endeavor. But practically speaking, this means the state would be surveilling literally everything we write or post, whether on a public forum or in a private message. If you don’t already have a snoopy government on your hands, a law like this could put you just one election away from a true mass surveillance regime of unprecedented scale. Surely, there are other ways to keep kids safe that won’t be quite so detrimental to democracy.

As a parent of two tiny children, I feel a little twinge when I criticize these kinds of laws. Maybe the internet really is rife with content that is harmful to children. Maybe we should be making these tradeoffs after all. But is kids’ safety really what’s driving the incredibly powerful lobbying groups that somehow have a seat at every table that matters on this issue, from London to D.C. to Brussels?

It is not. This week, Balkan Insight dropped a deeply reported follow-the-money investigation of the network of lobbying groups pushing for this kind of “safety” legislation in Europe that made a connection that really ought to be on everyone’s radar: The AI industry is a major lobbying force driving these laws.

The piece takes a hard look at Thorn, a U.S. organization that has been a vocal advocate for children’s online safety but that has also developed proprietary AI software that scans for child abuse images. Thorn seems to be advocating for companies to scan every drop of data that passes through their servers with one hand and then offering the perfect technical tool for said scanning with the other. It’s quite the scheme. And it seems to be working so far — the U.K. law is a done deal, and talks are moving fast in the right direction for Thorn in Europe and the U.S. Oh, and the U.S. Department of Homeland Security is already among Thorn’s current clients.

As a number of sources quoted in the Balkan Insight investigation point out, these laws might not even be the best way to tackle child exploitation online. They will require tech companies to break encryption across the internet, leaving people vulnerable to all kinds of abuse, child exploitation included. This level of surveillance will probably send the worst predators into deeper, darker corners of the web, making them even harder to track down. And trying to scan everything is often not the best way to trace the activities of criminal groups. 

I’m sure that some of the people pushing for these laws care deeply about protecting kids and believe that they are doing the best possible thing to make them safer. But plenty of them are driven by profit. That is something to worry about.

GLOBAL NEWS

The internet was barely accessible last week in the disputed territory of Nagorno-Karabakh, where Azerbaijani military troops have effectively claimed control of the predominantly ethnic Armenian region. Tens of thousands of Karabakhi Armenians are fleeing the mountainous region that abuts the Azerbaijani-Armenian border in what one European MEP described as a “continuation of the Armenian genocide.” The role of Russia in the conflict (which amid the war in Ukraine seems to have withdrawn its long-time support for the Armenian side) and the importance of Azerbaijan to Europe as a major oil producer have dominated most of the international coverage. But the situation for people in the region is dire and has largely been ignored. The lack of basic digital connectivity isn’t helping — researchers at NetBlocks showed last Thursday that Karabakh Telecom had almost no connectivity from September 19, when the full military offensive launched, until September 21, when Armenian separatist fighters surrendered. TikTok was also blocked during this period. 

Azerbaijani authorities are also taking measures to ensure that their critics keep quiet online. Several Azerbaijani activists and journalists who have posted critical messages or coverage of the war on social media have been arrested for posting “prohibited” content.

An internet blackout has also gone back into effect in Manipur, India, just days after services were restored. An internet blackout has been in effect in Manipur since the beginning of May, as nearly 200 people have died in still ongoing ethnic violence. This blackout was finally lifted last weekend. But protests in Imphal, the capital city of the northeastern state that borders Myanmar, erupted this week after photos of the slain bodies of two students who had gone missing in July surfaced and went viral on social media. Now the Manipur government, which has largely failed to contain the violence, even as its critics accuse it of fomenting clashes, has said disinformation, rumors and calls for violence are being spread online, necessitating another shutdown. An order from the state governor’s office, which has been circulating on X, says the shutdown will last for another five days. Indian authorities frequently shut down the internet in embattled states, despite the cost to the economy — an estimated $1.9 billion in the first half of this year alone — and the apparent lack of effect on public safety.

Speaking of shutdowns, there’s new hope that Amazon might have to shutter some part of its business or at least clean up its practices. This week, the U.S. Federal Trade Commission, alongside 17 state attorneys general, filed a massive lawsuit accusing the e-commerce behemoth of inflating prices, degrading product quality, and stifling innovation. These practices hurt both consumers and third-party sellers, says the FTC, who have little choice but to sell their goods on Amazon’s platform. This is a bread-and-butter anti-monopoly case — it doesn’t rely on the pioneering legal theories the FTC Chair Lina Khan is known for. In legal scholar and former White House tech advisor Tim Wu’s view, “The FTC complaint against Amazon shows how much, over the last 15 years, Silicon Valley has understood and used scale as a weapon. In other words, the new economy relied on the oldest strategy in the playbook — deny scale to opponents.” I couldn’t agree more.

The post Why are AI software makers lobbying for kids’ online safety laws? appeared first on Coda Story.

]]>
Tech is still critical for Iran’s protest movement — and its regime https://www.codastory.com/newsletters/iran-protests-anniversary-censorship-surveillance/ Thu, 21 Sep 2023 14:08:46 +0000 https://www.codastory.com/?p=46580 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: The U.K. passes a not-so-safe online safety law, Netanyahu and Musk talk AI safety and antisemitism.

The post Tech is still critical for Iran’s protest movement — and its regime appeared first on Coda Story.

]]>
It has been just over a year since protests erupted across Iran, after the 22-year-old Mahsa Amini was arrested by the morality police for allegedly breaching the country’s hijab law and died in police custody a few days later.

Iran has not seen uprisings of this magnitude since the Iranian Revolution of 1979: They have dwarfed the Green Movement protests of 2009, and they extend far beyond calls for an end to the mandatory hijab. Demonstrators — who range from university students to doctors to labor unions — have demanded economic reforms and the codification of women’s rights and called for “death to the dictator.” They have been met with a sharp, brutal response from Iranian authorities. Tens of thousands have been arrested and jailed, and more than 500 people have been killed in clashes with the security forces. Seven men have been executed by hanging for their involvement with the protests. And while large-scale demonstrations have mostly tapered off, acts of resistance continue.

Technology has played a role at many turns in what has happened over the past year. Social media blackouts and internet shutdowns have become a hallmark of the regime’s response to the protests: Research groups like OONI and NetBlocks have documented the blackouts, while tools like VPNs and Starlink have helped people work around them. The Google Play store, where 90% of Iranians would normally download apps, has been blocked since the protests began, to no avail.

And as with every major protest movement of the past decade, social media has been critical to the strategies of both the protesters and the regime they oppose. In Iran, where all major U.S.-based platforms are now blocked, Telegram became the go-to platform for protesters — and for the regime too. Several months ago, I spoke with Mahsa Alimardani about the power that Telegram held in this situation. Alimardani, who is a PhD candidate at the Oxford Internet Institute and a senior researcher at Article19, impressed upon me that the Iranian authorities were “thriving” on Telegram, using the platform to identify and shame protesters and even broadcast forced confessions. Coordinated disinformation campaigns are also a preferred tactic of the authorities. In a recent piece for the Atlantic Council, Alimardani described how the regime now routinely “floods” online spaces with messages and accounts that are designed to leave the opposition “distracted, disunited, and chaotic.”

And technical surveillance has been on the rise too. A few months after the protests began, it came to light that authorities were finding women who appeared with their heads uncovered in photos or videos posted online and using facial recognition tools to identify and pursue them for violating the law. Just yesterday, legislators approved a bill — dubbed the “hijab and chastity law” — that will jack up penalties for hijab law violations, require businesses to enforce the law and “create and strengthen AI systems to identify perpetrators.”
This week, you can find manyreflections across the web on what the movement means, one year on. The biggest takeaway for me is that while the Iranian regime hasn’t fundamentally changed, Iranian society unquestionably has — and, at least for the current generation, this change may be irreversible. As Iranian journalist Golnaz Esfandiari put it on NPR, “I don’t think people can go back to the way they were.”

GLOBAL NEWS

Will a new censorship regime really make British kids safer? On Tuesday, U.K. parliamentarians passed the hotly-debated Online Safety Bill that will require big social media platforms — and lots of other websites — to perform age checks for all users and somehow remove all content that could be harmful to kids before it appears online. It’s easy to agree that material promoting violence, suicide and disinformation is bad for kids, but screening for this kind of stuff will be the challenge of the century. Outside of China, where censorship really does come first, there are no major platforms that do this. That will have to change if the big players want to stick around in the U.K., and it will probably cause the platforms to censor lots of serious and legitimate stuff. 

The law could also leave smaller, alternative sites in a lurch. Wikipedia has said that depending on how the law is enforced, it might have to leave the U.K. altogether. On top of all that, it’s still not clear how the law might affect secure messaging apps. In recent months, both WhatsApp and Signal threatened to pull out of the U.K. should the government force them to screen messages for harmful content. Signal President Meredith Whittaker has already said that this option is still on the table.

Israeli lawmakers may soon be using more surveillance technology in public spaces across the country. National Security Minister Itamar Ben-Gvir is promoting a draft law that would allow the police to deploy facial recognition-enabled surveillance cameras in public spaces across Israel to “track the identity and location of suspects in the commission of crimes” and to aid in the “prevention of crimes.” Israeli authorities have used a variety of invasive surveillance tools in their occupation of Palestinian territories for some time. This move would broaden the state’s digital gaze, ensuring that just about everyone living on land controlled by Israel is under some surveillance. The shift gives credence to the notion that when invasive technologies are used to monitor people whose rights are limited or unrecognized in some way — whether they’re Arabs in Israeli-occupied Palestine or Uyghur Muslims in western China — they may soon be deployed and applied to the broader public.

Israel evidently wants to deepen its ties with other parts of the tech industry too. Earlier this week, Israeli Prime Minister Benjamin Netanyahu met with Elon Musk and Open AI co-founder Greg Brockman to discuss “AI safety.” This was quite the eyebrow-raiser, when you think about Musk’s predilection for posting and promoting antisemitic messages on X and his recent threat to sue the Anti-Defamation League for its research on hate speech, which tracks racism, homophobia and antisemitic speech online. None of this stopped Netanyahu from taking the meeting — another eyebrow-raiser — though he did bring up the issue and pressure Musk to do more about it. Don’t hold your breath, Bibi.

The post Tech is still critical for Iran’s protest movement — and its regime appeared first on Coda Story.

]]>
The surveillance industrial complex is thriving at the border https://www.codastory.com/newsletters/surveillance-immigration-us-uk/ Thu, 14 Sep 2023 14:11:18 +0000 https://www.codastory.com/?p=46484 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: A top Russian journalist grapples with a spyware attack, AI is probably going to mess with elections, and the U.S. is finally taking Google to court

The post The surveillance industrial complex is thriving at the border appeared first on Coda Story.

]]>
On Tuesday, the European Court of Human Rights issued a pivotal ruling on mass surveillance that should have implications in the U.K. and beyond. The court found that plaintiffs Claudio Guarnieri and Joshua Wieder, both experts on data protection and surveillance, “reasonably” believed that the GCHQ, the U.K.’s main intelligence agency, had intercepted their data under its bulk data collection regime.

Guarnieri and Wieder originally brought their case to the U.K.’s Investigatory Powers Tribunal in 2016, in what amounted to a test of the system in the wake of the Edward Snowden revelations, which exposed the large-scale spy programs of not only the U.S., but also the U.K., Australia, Canada and New Zealand governments. When the Tribunal refused to hear their case, they took it to Strasbourg. Even though the two plaintiffs aren’t U.K. citizens, the court decided they still had some baseline rights to privacy under the European Convention on Human Rights.

There’s a difference between governments hoovering up data as a routine practice and immigration agencies tracking individuals after they cross a border, but the case should set some precedent concerning the data privacy rights of non-U.K. citizens once they’re in the U.K. What might this mean for migrants coming to the U.K. from across the globe in pursuit of a better life? In a world where everyone depends on internet-based tools to communicate, travel, work and earn money — tools that collect gobs of data about us along the way — the question feels pertinent.

The surveillance industrial complex should be top of mind in the U.S. too, as we learn more about border security and management agencies’ exploitation of digital data to surveil people trying to enter the U.S. In Texas, it came to light in late August that a group of Texas National Guard members — acting within Governor Greg Abbott’s controversial state-run border mission — had carried out an unauthorized spy operation in which they deliberately infiltrated WhatsApp groups used by migrants and smugglers to communicate about their routes. 

I’m not sure which is worse — WhatsApp infiltration or border agencies creating fake social media profiles in order to “research” people who are seeking residency in the U.S. through established legal channels. The latter strategy, by the way, has been deployed not as part of some rogue Texas border operation but under the auspices of the U.S. Department of Homeland Security. Critical details about the program surfaced last week, thanks to a series of open records requests filed and obtained by NYU’s Brennan Center for Justice. 

On a somewhat brighter note, last week, U.S. Customs and Border Patrol publicly vowed to stop buying troves of people’s location information from data broker companies like Venntel by the end of this month. How are third-party companies you’ve probably never heard of getting their paws on your data? Too often, when you sign up for a new digital service and “agree” to its terms and conditions, you have no choice but to authorize the service to sell your data to companies like Venntel, which will analyze and repackage it for sale to the highest bidder. At least soon, if they do as they’ve promised, CBP won’t be one of them.

GLOBAL NEWS

Pegasus, one of the world’s most pernicious surveillance technologies, infected the iPhone of acclaimed Russian journalist Galina Timchenko. On Wednesday, researchers at the University of Toronto’s Citizen Lab and Access Now released technical evidence that Timchenko’s phone was compromised in February 2022. This is big, not only because of Timchenko’s unique position as the co-founder of the leading Russian independent media outlet Meduza, which operates out of Latvia, but because Pegasus, built by Israeli spyware firm NSO Group, has publicly stated that it won’t deploy its products in Russia or the U.S., or against people from these countries, presumably due to pressure from the Israeli government. In Meduza’s coverage of the revelations, Timchenko described feeling both terrified and defiant about the discovery. “Just what were they planning to find? They put me under a magnifying glass, hoping to catch something… Go ahead and watch, you creeps! Feast your eyes,” she said.

Experts have been saying it for a while, and the public is catching up: AI is going to mess with elections. A new Axios-Morning Consult poll shows that half of Americans think AI will help spread disinformation in the lead up to the 2024 general election in the U.S. and that this will affect election outcomes. They’re right to worry, especially since X (formerly known as Twitter) is planning to open the floodgates and reinstate political advertising on its platform. Though it is growing crummier by the day, I think it’s safe to assume that what appears on X will still have a significant impact on what the media decide to cover and what voters believe to be true. And I’m not sure if X is actually shadowbanning the New York Times, but Musk’s attacks on the newspaper, and the fact that their traffic from Twitter has dropped substantially since late July, don’t look good. While it’s one among many reliable sources out there, it’s icky to think that U.S. voters might be less likely to read the New York Times because Elon tweaked the system out of spite.

U.S. v. Google: It’s finally happening. The U.S. Department of Justice will officially see Google in court this week, in the first of three upcoming antitrust cases against the $1.7 trillion tech behemoth and the first such case brought against any major tech company since the government sued Microsoft in 1998. This case will focus on Google’s search engine, which, the DOJ argues, the company has unfairly elevated to monopoly status by brokering deals with mobile phone and browser service providers to set Google as their default search engine. The company commands 90% of the search engine market in the U.S., and 94% of it globally. Google argues that it simply offers the best service in the industry and people use it because they love it. 

Tech Policy Press and Ars Technica have put out helpful “what to watch for” pieces about the trial. But the trouble is, the public won’t be able to watch for much, since Google convinced Judge Amit Mehta to keep the trial closed to the public, on the grounds that the company’s precious “trade secrets” might otherwise be compromised. Early next year, the courts will hear another case that the DOJ is bringing against the company, concerning its hyperdominance of the online advertising market. I’m even more excited for this one.

WHAT WE’RE READING

  • I am crushing hard on 404 Media, a new tech news venture of VICE Motherboard alums like Joe Cox, Jason Koebler and Samantha Cole, who wrote this excellent and hilarious piece about the scourge of AI-generated mushroom foraging books on Amazon. The president of the New York Mycological Society says the books offer imprecise or flat-out wrong advice on what to pick and what to avoid. The TLDR here is that if you eat the wrong mushroom, you will die. So consider the source!
  • On that note, I think my friend Ethan Zuckerman is right to worry about AI getting to train itself. He’s written a piece about it for Prospect.
  • And as usual, I am all for popping the chatbot hype balloon, which Sara Goudarzi is conveniently advocating at the Bulletin of the Atomic Scientists. Give her essay a read.

The post The surveillance industrial complex is thriving at the border appeared first on Coda Story.

]]>
How earthly notions of conquest — and Big Tech power moves — are playing out in the stars https://www.codastory.com/newsletters/cyberlibertarianism-satellite-internet-musk/ Thu, 07 Sep 2023 14:31:44 +0000 https://www.codastory.com/?p=46418 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: New court filings suggest X knew about Saudi infiltration, Google servers could worsen drought in Uruguay.

The post How earthly notions of conquest — and Big Tech power moves — are playing out in the stars appeared first on Coda Story.

]]>
THE NEW UTOPIA

The summer is over and the secret is out about Flannery Associates, the once-mysterious company that has bought thousands of acres of land east of the San Francisco Bay as part of a Silicon Valley billionaire-backed venture to build a “new California city.” The New York Times reported in late August that some of the industry’s biggest names — including Reid Hoffman, Marc Andreessen and Michael Moritz — plan to build a techno utopia in largely rural Solano County and have already spent around $800 million to make it happen. Investors and other sources familiar with the pitch said the new city was billed as a bustling metropolis that would bring thousands of jobs to the area, be “as walkable as Paris” or New York’s West Village and even help solve the Bay Area’s housing crisis.

This kind of magical thinking is nothing new — it has deep roots in northern California, and in some ways it echoes visions of a utopian cyberspace that people like Electronic Frontier Foundation co-founder and Grateful Dead lyricist John Perry Barlow espoused. But the kind of undertaking that Flannery Associates has in mind takes a special kind of hubris and a ton of money. The hubris and the money are not new for people of this ilk. But the effects of their actions are becoming bigger and more consequential for the rest of us.

Indeed, a new kind of utopia seems to be emerging, whether in Solano County, California or in Saudi Arabia’s Neom, which my colleague Oliver Bullough described earlier this week as “a blandly-named but horrific new city that Saudi Crown Prince Mohammed bin Salman has decided to build in the desert because he can.”

But why limit yourself to earthly endeavors? This must have been the question on Elon Musk’s mind when he built SpaceX. In space, there is literally no one to answer to. Musk, the ultimate techno utopian conquistador of our times, can do almost anything he wants. And he has.

Starlink, the satellite internet service offered by Musk’s SpaceX has shot roughly 4,000 satellites into low Earth orbit, far outnumbering all the other satellites by impressive margins, and he plans to launch plenty more in the coming years — up to 42,000. Meanwhile, China plans to create its own satellite internet service with a “constellation” of nearly 13,000 small satellites that will have to find a way to share the orbit with Musk’s Starlink battalion.

How humans are engaging with space, specifically in low Earth orbit — a place where, unlike in cyberspace, there is no real jurisdiction or system of governance — is a compelling question for anyone interested in forging new social systems or societies.

Science writer Sarah Scoles brought us a sharp new feature this summer wrestling with some of the hard realities of the new space race. Who can send satellites into space? What do we do when two satellites get in each other’s way? How do we handle the rapidly accumulating debris from satellite crashes of the past? When you think about all the things that satellites provide for us on Earth — from internet access, to GPS technology, to communication networks for conflict zones — it’s not so hard to see why we should care about what happens up there. Sarah’s piece gives us a glimpse into the potentially catastrophic future that may unfold in space if governments and companies don’t figure out how to answer these questions, fast. It also makes a great companion to some of the summer’s deep dives on Musk’s power in the stars, from the New York Times and The New Yorker.

I see the evolution of the internet as a cautionary tale for the new space race. In the early 2000s, the civil libertarian spirit that defined the early internet and inspired communities like Barlow’s largely gave way to a culture and legal ethos firmly tied to the tenets of free market capitalism and an expectation of lax or no regulation. Fast forward to last year’s Twitter takeover, and we the internet users find ourselves at the mercy of people like Musk. Since he snapped up one of the world’s most powerful platforms for free speech and information-sharing, Musk has essentially dismantled it, because he can. Chew on that the next time you gaze up at the stars.

GLOBAL NEWS

Big Tech is literally in space, and virtually in the cloud, but these companies also have a huge footprint on Earth — the quantities of data that Google and Microsoft wield require massive data centers that generate a lot of heat on the ground. How do we cool them down? Water is an effective solution, of course, but it too is an exhaustible resource. In southern Uruguay, Google has plans to build a data center that would require an estimated two million gallons of tap water a day to keep its servers cool. Last month, Uruguayans facing the country’s worst drought in 74 years took to the streets of Montevideo to voice their anger and frustration over the water shortage and the impending Google contract. The company and Uruguayan officials say they’re looking for ways to reduce the burden on the country’s water supply, but the bigger issue isn’t going away. And Uruguay is just one among dozens of countries experiencing the environmental effects of Big Tech.

Another one is Saudi Arabia, where both Google and Microsoft have set up data centers over the past two years: I wrote about this in more depth in June. But this isn’t the biggest story out of the kingdom this week.

X is facing new allegations that it looked the other way when the Saudi government infiltrated the company to spy on its critics back in 2015. A new court filing in the 2019 bombshell case against X purportedly includes evidence that the company, then known as Twitter and under Jack Dorsey’s leadership, either knew or willfully ignored the fact that two Saudi Arabian employees were working on behalf of the Saudi government to gather up the data of an estimated 6,000  users who criticized the regime. Washington Post columnist Jamal Khashoggi, who was murdered and dismembered at the Saudi consulate in Istanbul, was one of them. Lest anyone think that those grisly days are somehow behind Crown Prince Mohammed bin Salman, a Saudi court sentenced a retired schoolteacher to death for his activities on X and YouTube just last week. Muhammad Al-Ghamdi is the brother of a Saudi scholar who lives in the U.K. and runs Sanad, a group that advocates against human rights abuses in Saudi Arabia and the Gulf.

The post How earthly notions of conquest — and Big Tech power moves — are playing out in the stars appeared first on Coda Story.

]]>
Apple caves to the Kremlin, for a minute https://www.codastory.com/newsletters/russia-censorship-apple/ Thu, 10 Aug 2023 17:26:36 +0000 https://www.codastory.com/?p=45707 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: India approves a privacy law that could enable ‘overbroad surveillance’ and WorldCoin is under fire in Kenya.

The post Apple caves to the Kremlin, for a minute appeared first on Coda Story.

]]>
When Russia launched its full-scale invasion of Ukraine last year, Silicon Valley companies responded with uncharacteristic speed and conviction. Meta, Google and Twitter (back when it was still Twitter and still had some principles) put out statements declaring their support for Ukraine and their intentions to go after Russian state propaganda on their platforms. Even Apple — with its sleek products that seem to always know what we need before we need it and its notoriously tight-lipped overlords — took a stand, suspending iPhone sales and Apple Pay.

But Apple has kept its services available for iPhone and Mac users inside the country, and the App Store — iPhone users’ window to much of what the internet has to offer — has become a place where Russians can find some pockets of information not tailored by the Kremlin. In April 2022, the embattled “Smart Vote” app, run by jailed opposition leader Alexei Navalny, even reappeared in the App Store after being blocked for months by both Apple and the Google Play store.

But last week, Apple’s commitment seemed to falter when a popular news and commentary podcast suddenly disappeared from the company’s podcast app for users across the globe. What happened? That’s the question on many people’s minds, and it’s also the name of the censored podcast from Meduza, one of Russia’s leading independent news sources, which now operates in exile like most of its counterparts.

“Apple deletes arguably Russia’s best podcast because Russian officials have asked for it,” said Anton Barbashin, a political analyst and the editor of Riddle Russia. “I’ll remind everyone that Meduza has the best reach into Russian audience,” he wrote on X. “It only makes perfect sense for Russian state to shut them down. But WTF, @Apple?”

I might ask WTF too, except that this kind of move, accompanied by zero explanation from the company, is all too familiar. For example, in Hong Kong, Apple has repeatedly bowed to political pressure to remove apps or pause services, with no acknowledgement or justification. 

Previous app removals have always been confined to one place: Hong Kong’s protest mapping app was only restricted in Hong Kong. But this time is different because Meduza’s podcast was blocked worldwide. Marielle Wijermars, a cybersecurity expert and an associate professor at Maastricht University in the Netherlands, thinks this is concerning. 

“The podcast may have violated Russia’s repressive laws, which Apple could argue warrants local removal to comply with, for example, a court decision,” she told me in an email. “But their contents are not illegal outside of Russia. A global removal thus applies Russian law beyond its jurisdiction and restricts speech in other countries where that speech is actually legal.”

She’s right. And sure enough, within a few days, and with no explanation, the podcast reappeared in the App Store for users everywhere. That’s great news for now.

But the flip-flop shows that the company is not impervious to pressures from the Kremlin. And the openings in Russia for accessing real information are fewer and farther between every time I look. It’s worth noting that when Big Tech took a big stand against Russia early on in the war, users there suffered the consequences — Meta’s services and Twitter were soon blocked by Russian authorities, along with thousands of news and information websites.

What are Russians to do then? “Just use a VPN,” my tech savvy readers might say. But that’s getting tougher too. Mediazona reported earlier this week that VPNs — especially those that cater to civil society groups and independent IT researchers — are struggling to stay online in the face of new efforts to block their services. The bottom line here is that the Kremlin is making it more difficult every day to access reliable information from inside Russia. It may seem like a faraway problem. But the stakes are high because what Russian citizens believe to be true affects not only their daily lives but their perception of the war in Ukraine, including the war crimes committed in their name. I’d say this is a problem for all of us.

GLOBAL NEWS

VPNs may also be on the chopping block in India, where the parliament just approved a new “privacy” law that could make it much harder for people to use privacy-preserving technologies, depending on how it’s implemented. The law sets new standards for the governance of data moving in and out of the country and will require companies to get express consent from people before collecting their data. Although this might sound similar to Europe’s General Data Protection Regulation, a critical difference is that the law doesn’t require the government to get the same consent, and it removes key due process mechanisms set up to limit government overreach. In this vein, the Internet Freedom Foundation says it could set the stage for “overbroad surveillance” by state authorities.

Kenyan authorities have put the kibosh on WorldCoin, citing privacy and finance concerns. WorldCoin, OpenAI CEO Sam Altman’s cryptocurrency project, has some very lofty ambitions — to bank the unbanked and to snuff out the seemingly eternal challenge of identity verification in the digital age. How do they do this? By capturing your iris data, of course 👀. WorldCoin rolled out a beta version of its system, complete with an iris-scanning “orb” (a chrome ball with a scanner inside of it) in select cities in Africa and Asia in 2021, receiving mixed reviews and deep skepticism from privacy experts.

The beta phase is over now, and WorldCoin has made its official debut. It claims to have raked in more than two million sign-ups around the world, despite facing legal challenges in Germany, France and now Kenya. Last week, Kenya’s interior ministry suspended WorldCoin and launched an investigation into the product, including its security protocols, financials and data protection mechanisms, Reuters reported. Kenya isn’t exactly a beacon for data protection, but it is a leader when it comes to mobile banking and digital money — if Altman thinks he’s bringing something new to East Africans’ mobile phones, he failed to do his homework. Mobile money has been around in Kenya since 2007, for almost as long as mobile phones have. If you’ve got a phone and a working SIM card, you can use the homegrown M-PESA system to send and receive money through a state-backed exchange. The system later expanded to Tanzania, Mozambique and a handful of other countries in Africa.

This isn’t the only thing Altman has to worry about in Kenya. Content moderators working for Sama, a third-party contractor hired by OpenAI in Kenya to clean up its data set, have filed a petition calling on regulators to investigate both companies. Moderators told the Guardian that they’re made to review reams of text and images “depicting graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest” while receiving less than $4 per hour in wages. We’ve got more coming on this story in the fall.

Altman is also facing a lawsuit at home in the U.S., targeting both OpenAI and Microsoft. The companies are being sued over their data-hoovering practices to the tune of $3 million in California. The class action suit alleges that by scraping data from across the internet to create AI products, the companies have violated federal and state laws on privacy and computer fraud. In an instructive thread about the Open AI/Microsoft suit, Oxford professor and digital privacy expert Carissa Veliz wrote that “OpenAI has scraped off data from the internet, including personal data, without paying for it or asking for people’s consent. In short, they’ve stolen that data.”

The post Apple caves to the Kremlin, for a minute appeared first on Coda Story.

]]>
Musk joins the right-wing legal crusade against tech researchers https://www.codastory.com/newsletters/musk-x-hate-speech-lawsuit/ Thu, 03 Aug 2023 14:16:08 +0000 https://www.codastory.com/?p=45658 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Senegal’s internet plunges into darkness, Jordan’s parliament wants to get tough on cybercrime and the FBI investigates its own (probably illegal) use of spyware.

The post Musk joins the right-wing legal crusade against tech researchers appeared first on Coda Story.

]]>
LITIGIOUS ELON AND THE WAR ON RESEARCH

Another social media research organization is being sued this week, this time by the company formerly known as Twitter. On Monday, X filed a lawsuit against the Center for Countering Digital Hate, a nonprofit research and advocacy organization that tracks violent and hateful speech on social media. X claims that the research organization violated its terms of use when it scraped data from the platform, among other allegations. 

Much of the filing focuses on the impact that the Center for Countering Digital Hate’s research has had on advertising and, by extension, on X’s bottom line. The group regularly uses its findings to pressure big brands to stop buying ads on X because showing ads next to tweets filled with racist speech and political disinformation is generally regarded as bad for business. This increasingly popular tactic among tech-focused civil rights advocacy groups in the U.S. has proven powerful and may indeed be one reason that X’s ad revenues have plummeted since Musk took over.

The court filing and the company’s all-but-incomprehensible blogpost about the lawsuit say plenty about how this strategy threatens X’s business model. But the company also argues, as Musk so often does, that X is simply trying to protect people’s rights to free speech and that the researchers want to undermine it. Nevermind that hate speech and threats of violence are routinely deployed as silencing tactics by trolls of many stripes, including Musk himself. The filings also make many mentions of the organization’s focus on trying to reduce online disinformation about topics like Covid vaccines, reproductive healthcare and climate change. X argues that this aspect of the group’s work is driven by ideology, when in reality, it is driven by hard facts. Covid vaccines work, reproductive healthcare is a human right, and climate change is real.

The case against the Center for Countering Digital Hate is all too similar to the spate of legal threats recently brought against members of the Election Integrity Partnership, a research coalition assembled around the 2020 election in the U.S. that included the Stanford Internet Observatory, the German Marshall Fund and the Atlantic Council’s Digital Forensic Research Lab, among others. These research groups were focused on tracking election-related disinformation — including state-run accounts promoting false information about who won the 2020 election — and alerting social media companies. When Twitter was still Twitter and Elon Musk was just a foul-mouthed super user of the site, the company actually did try to reduce demonstrably false information about voting rights and election outcomes. Right-wing politicians and magnates like Musk have long leaned on the argument that this infringes on people’s rights to free speech. But even now, when Musk is at the helm of this rapidly disintegrating but still very influential platform, he can’t seem to get enough. So he’s taking this comparatively tiny research group to court.

Imran Ahmed, who leads the Center for Countering Digital Hate, told the New York Times that Musk’s actions are “a brazen attempt to silence honest criticism and independent research.” They are also undoubtedly taking up the Center’s time and resources that would otherwise be spent doing more research in the public interest. 

GLOBAL NEWS

Senegalese authorities ordered a nationwide mobile internet shutdown on Monday after officials apprehended and jailed opposition leader Ousmane Sonko and the country’s Interior Ministry moved to dissolve the PASTEF party, which Sonko leads. This latest chapter in the long-running conflict between Sonko and Senegalese President Macky Sall has seen large pro-PASTEF rallies and heavy-handed state responses, including internet restrictions. In this case, officials indicated that the shutdown was ordered “due to the dissemination of hateful and subversive messages in a context of disturbance of public order.” A similar shutdown was imposed last June and turned into a curfew-style system, with people allowed to use the internet during the day but kicked offline in the evening hours.

Don’t like the police? Don’t say so in Jordan, where the parliament is mulling over a draft cybercrime law that covers everything from “content that provokes strife” to regulations reining in Big Tech. The law would make it a crime to post any material online that “undermines national unity, incites or justifies violence or hatred, or disrespects religions,” and it includes special provisions criminalizing speech related to law enforcement officials.

The law would require social media companies with more than 100,000 subscribers (read: Meta) in Jordan to establish offices in the country. Embarking on the well-trodden path of heavyweight countries like India, some — but not all — Jordanian MPs appear eager to require more cooperation between Big Tech and the government. These policies typically force companies into much stricter compliance with local law, lest they put their business or even their own employees at risk. And it can spell trouble for people who use social media to hold the government to account or document police abuse. Since Jordan’s draft law also makes it illegal to publish any material about law enforcement officials “that may offend or harm” the institution, well, we can guess what might happen next. Alongside the Jordan Open Source Association and SMEX, global groups like Access Now, Article 19 and the Electronic Frontier Foundation are publicly opposing the law.

There’s new evidence that the U.S. government has been using spyware built by Israel’s NSO Group, despite the fact that the company was officially blacklisted by the White House in March. Documents reviewed by the New York Times in April showed that U.S. government agents, operating behind a front company called Riva Networks, were using a geolocation tool built by the Israeli surveillance tech giant that would allow agents to track anyone through their mobile device, without their knowledge. White House staff, who said they knew nothing of it before the Times’ story ran, put their best guys on it — they asked the FBI to investigate. But this week, it came to light that the NSO contract was held by….the FBI.

The revelations shouldn’t be surprising — NSO first worked its way into U.S. government contracts in 2019. But they sure do cast a shadow over Biden’s ban on commercial spyware.

The post Musk joins the right-wing legal crusade against tech researchers appeared first on Coda Story.

]]>
The AI apocalypse might begin with a cost-cutting healthcare algorithm https://www.codastory.com/newsletters/cigna-ai-healthcare-algorithm/ Thu, 27 Jul 2023 15:45:52 +0000 https://www.codastory.com/?p=45546 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Google and Meta face new lawsuits over violent content, and Saudi Arabia is playing dirty on Snapchat.

The post The AI apocalypse might begin with a cost-cutting healthcare algorithm appeared first on Coda Story.

]]>
On Monday, patients in California filed a class action lawsuit against Cigna Healthcare, one of the largest health insurance providers in the U.S., for wrongfully denying their claims — and using an algorithm to do it. The algorithm, called PXDX, was automatically denying patients’ claims at an astonishing rate — the technology would spend an estimated 1.2 seconds “reviewing” each claim. During a two-month period in 2022, Cigna denied 300,000 pre-approved claims using this system. Of claim denials that were appealed by Cigna customers, roughly 80% were later overturned.

This is bad for people, but it could also sound wonky, banal or even “small bore” to tech experts. Yet it is precisely the kind of existential threat that we should worry about when we look at the consequences of bringing artificial intelligence into our lives.

You might remember this spring, when the biggest and wealthiest names in the tech world gave us some pretty grave warnings about the future of AI. After a flurry of opinion pieces and full-length speeches, they found a way to boil it all down to a simple “should” statement

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

This sentence and its most prominent signatories (Sam Altman, Bill Gates and Geoffrey Hinton among them) swiftly captured the headlines and our social media feeds. But have no fear, the statement’s authors said. We will work with governments to ensure that AI regulations can prevent all this from happening. We will protect you from the worst possible consequences of the technology that we are building and profiting from. Oh really?

OpenAI CEO Sam Altman then jetted off on a global charm tour, on which he seems to have won the trust of heads of state and regulators from Japan to the UAE to Europe. A week after he visited the EU, the highly anticipated AI Act had been watered down to suit his company’s best interests. Mission accomplished.

Before the tech bros began this particular round of spreading doom and gloom about blockbuster-worthy, humanity-destroying AI, journalists at ProPublica had published an investigation into a much more clear and present threat: Cigna’s PXDX algorithm, the very subject of the aforementioned lawsuit. 

In its official response to ProPublica’s findings, Cigna had noted that the algorithm’s reviews of patients’ claims “occur after the service has been provided to the patient and do not result in any denials of care.” 

But hang on a second. This is the U.S., where medical bills can bankrupt people and leave them terrified of seeking out care, even when they desperately need it. I hear about this all the time from my husband, who is a physician and routinely treats incredibly sick patients whose conditions have gone untreated for years, even decades, often due to their being uninsured or underinsured. 

This is not the robot apocalypse or nuclear annihilation that the Big Tech bros are pontificating about. This is a slow-moving-but-very-real public health disaster that algorithms are already inflicting on humanity. 

Flashy tools like ChatGPT and LensaAI may get the lion’s share of headlines, but there is big money to be made from much less interesting stuff that serves the banal needs of companies of all kinds. If you read about what tech investors are focused on right now, you will quickly discover that the use of AI in areas like customer service is expected to become a huge moneymaker in the years to come. Again, forget the forecasted human extinction by robots that take over the world. Tech tools that help “streamline” processes for big companies and state agencies are the banal sort of evil that we’re actually up against.

Part of the illusion that seems to drive statements that prophesy human extinction is that technology will start acting alone. But right now, and for the foreseeable future, technology is the result of a multitude of choices made by real people. Right now, tech does not act alone.

I don’t know where we’d be without this kind of journalism or the AI researchers who have been studying these issues for years now. I’ve plugged them before, and now I’ll do it again — if you’re looking for experts on this stuff, start with this list.

And now I’ll plug a new story of ours. Today, we’re publishing a deep dive that shows how a technical tool, even when it’s built by people with really good intentions, can contribute to bad outcomes. Caitlin Thompson has spent months getting to know current and former staff at New Mexico’s child welfare agency and speaking with them about a tool that the agency has been using since 2020. The tool’s intention? To help caseworkers streamline decisions about whether a child should be removed from their home, in cases where allegations of abuse or neglect have arisen. This is a far cry from the ProPublica story, in which Cigna seems to have quite deliberately chosen to deny people’s claims in order to cut costs. This is a story about a state agency trying to improve outcomes for kids while grappling with chronic staffing shortages, and it shows how the adoption of one tool — well-intentioned though it was — has tipped the scales in some cases, with grave effects for the kids involved. Give it a read and let us know what you think.

GLOBAL NEWS

Google and Meta are facing new legal challenges over violent speech on their platforms. The families of nine Black people who were killed in a supermarket in Buffalo, New York in 2022 have filed suit against the two companies, arguing that their technologies helped shape the ideas and actions of Payton Gendron, the self-described white supremacist who murdered their loved ones. The U.S. Supreme Court has already heard and decided to punt on two cases with very similar characteristics, reasoning that the companies are shielded from liability for speech posted by their users under Section 230 of the Communications Decency Act. So the new filings may not have legs. But they do reflect an increasingly widespread feeling that these platforms are changing the way people think and act and that, sometimes, this can be deadly.

The Saudi regime is using Snapchat to promote its political agenda — and to intimidate its critics. This should come as no surprise: An estimated 90% of Saudis in their teens and 20s use the app, so it has become a central platform for Saudi Crown Prince Mohammed “MBS” bin Salman to burnish his image and talk up his economic initiatives. But people who have criticized the regime on Snapchat are paying a high price. Earlier this month, the Guardian reported allegations that the influencer Mansour al-Raqiba was sentenced to 27 years in prison after he criticized MBS’ “Vision 2030” economic plan. Snapchat didn’t offer much in the way of a response, but Gulf-based media have reported​ on the company’s “special collaboration” with the Saudi culture ministry. It’s also worth noting that Saudi Prince Al Waleed bin Talal — who is Twitter, er, X’s, biggest shareholder after Elon Musk — is a major investor in the company.

WHAT WE’RE READING

  • Writing for WIRED, Hossein Derakshan, the blogger who was famously imprisoned in Iran from 2009 until 2015, reflects on his time in solitary confinement and what it taught him about the effects of technology on humanity.
  • Justin Hendrix of Tech Policy Press has written a new essay on the “cage match” between Elon Musk and Mark Zuckerberg, the “age of Silicon Valley bullshit” and the overall grim future of Big Tech in the U.S. Read both pieces, and then take a walk outside.

The post The AI apocalypse might begin with a cost-cutting healthcare algorithm appeared first on Coda Story.

]]>
Meta’s business model is crumbling in Europe https://www.codastory.com/newsletters/eu-meta-surveillance/ Thu, 20 Jul 2023 15:58:10 +0000 https://www.codastory.com/?p=45401 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us

The post Meta’s business model is crumbling in Europe appeared first on Coda Story.

]]>
It has been 15 days since the launch of Threads and the media razzle dazzle about it has become impossible to avoid. Meta’s new Twitter-like service, which various outlets have casually termed  a “Twitter killer,” racked up more than 30 million users by the end of its July 5 launch date, and topped 100 million by July 10. Whew! Zuck must be getting champagne flute emojis in record numbers.

But hang on – let’s pause for a quick calendar check. Threads debuted on July 5. Sure, it was just after a holiday weekend in the U.S. But it also came on the heels of something much more consequential for the company. On July 4, the European Court of Justice issued a historic ruling that undermines Meta’s legal justification for serving targeted ads in the EU and empowers competition regulators to go after the company on data protection grounds. Legalese aside, this ruling could effectively blow up Meta’s business model in the region, and has put the company’s operations there into serious jeopardy. I recommend Jason Kint’s breakdown if you want the legal nitty gritty. But the bottom line is that Meta is in big trouble for collecting tons of user data without sufficient consent and then mixing and matching it across its services. This is the engine driving the creepily precise ad targeting jiu-jitsu for which Meta is so well known and upon which its business model is based. 

If Zuck is emoji-toasting Threads’ success with one hand, chances are good that with his other hand he is feverishly texting the heaviest of his legal and political heavyweight pals like never before.

The effects of the ruling are already playing out. Norway’s data protection authority stepped up last Friday and temporarily banned Meta from engaging in “highly opaque and intrusive monitoring and profiling operations” (AKA serving targeted ads). Beginning on August 4, Facebook and Instagram will be allowed to serve ads to users, but those ads may only draw on information that appears in the “about” section of users’ public profiles.

If Meta fails to comply with Norway’s ban, it will be stuck with a daily fine just shy of $100,000. While that alone won’t make much of a dent in Meta’s bottom line, it’s part of a bigger picture that almost certainly will. Other EU states are bound to follow suit. Plus there’s the 390 million euro fine that Meta is already facing in Ireland, stemming from the strategic litigation efforts of Austrian privacy lawyer Max Schrems. Mix this all together with the obvious anxiety that it will cause among investors, and you have yourself a serious problem.

If you think Zuckerberg isn’t nervous about it, look at Threads just one more time. Unless you’re in Europe, in which case, you can’t, because Threads hasn’t launched there yet. Instagram head Adam Mosseri says this has to do with the company’s efforts to comply with the EU’s Digital Services Act. But the timing is conspicuous. They may actually be running scared.

Researchers in Switzerland recently rolled out an AI model that they say can detect homosexuality by scanning a person’s brain. AI ethics and LGBTQ rights experts say the model is dangerous. Coda’s Isobel Cockerell has the story.

GLOBAL NEWS

The launch of Threads is also a reminder of the risks that Meta continues to take in monetizing public conversations. Computer and social science researchers have shown again and again that the data collection practices that lie at the heart of what I will now go ahead and call Meta’s surveillance advertising business model are a big part of what makes this a dangerous proposition. The model’s success depends on attention – views, clicks, likes, shares and comments – and any quasi-public space that does this is bound to run into trouble when it comes to political speech, elections, conflict zones, and any other situation where harmful speech or disinformation can go viral and really hurt people. Hate speech in a vacuum is hateful and might hurt the handful of people who see it. But hate speech amplified by an algorithm can reach and inspire millions. We need only look at Myanmar, Ethiopia, or most recently Sudan to understand that it can be deadly.

Meta seems hopeful that we’ll somehow forget this and embrace Threads as a “positive and creative space to express your ideas.” Mosseri has actually said he hopes to keep “politics and hard news” off of Threads. In an intro post, he wrote: “my take is, from a platform’s perspective, any incremental engagement or revenue they might drive is not at all worth the scrutiny, negativity (let’s be honest), or integrity risks that come along with them.” It must be a real drag having to answer to the public and investors about the fact that Meta’s platforms regularly amplify threats of violence and hate speech that has led to real-life harm. The New Republic’s Molly Taft mock paraphrased Mosseri with this zinger of a tweet: “head of instagram says: pls don’t be a downer on threads, we don’t want to have to spend money on fact checking, also no one bring up Myanmar thx.” Buuurn.

And Meta is feeling the heat elsewhere too. Uganda’s parliament slapped a 5% levy on Meta alongside several other Big Techs last week, as part of a new tax law that will apply to “non-resident providers of digital services” like Meta, Twitter, Amazon, Netflix and Uber. The law’s proponents say it will boost public revenue and that it’s overdue from foreign companies that turn big profits in the country. State Finance Minister Henry Musasizi put it this way: “We are looking at the money obtained by the supplier of these services. The money for Uber goes to California; the man makes money but doesn’t pay taxes.” Fair enough. 

And Uganda is not alone in this pursuit – Nigeria and Kenya have both introduced similar tax laws targeting digital services in recent years. But can policymakers find a way to keep the tax from being passed on the consumers? Social media and taxation haven’t mixed well in the past. Uganda’s 2018 “social media tax,” that President Museveni (in power then, in power now) defended as a way to curb “gossip” on social media, forced Ugandans to cough up a daily levy before using social media mobile apps. The law sparked widespread protests, and in addition to violating net neutrality principles, it left poor Ugandans facing a 10% increase in the cost of getting online. A year after it passed, internet use among Ugandans had fallen by 30%.

Lithuania is trying to bring Big Tech to heel too, on the issue of political bots. This week Coda reporter Amanda Coakley and I teamed up on a story about proposed amendments that would criminalize “disinformation, war propaganda, [content] inciting war or calling for the violation of the sovereignty of the Republic of Lithuania by force” from “automatically controlled” accounts. Bigger countries have seen some success using a similar approach, but I wonder if Meta and the rest will be willing to comply in Lithuania, home to just 2.8 million people.

WHAT WE’RE READING

  • While European governments are doing plenty of talking to U.S. tech giants, U.S. government officials are currently unsure if they can talk to those same giants themselves. This is because a federal judge in Louisiana recently ruled that state attempts to coerce or otherwise influence companies’ actions is a violation of the First Amendment. Ruh roh! I highly recommend this breakdown of the decision by CJR’s Matthew Ingram.
  • Could AI be used to automatically interpret and enforce the law? I’m sorry to say that two very smart people who I know and trust think that it could. Bruce Schneier and Jon Penney set the dystopian scene and then back it up with proper intel and research in this zinger for Slate’s Future Tense vertical.

The post Meta’s business model is crumbling in Europe appeared first on Coda Story.

]]>
Philippine leader Bongbong Marcos’ digital disinformation regime https://www.codastory.com/newsletters/philippines-marcos-digital-disinformation/ Thu, 13 Jul 2023 19:32:37 +0000 https://www.codastory.com/?p=45247 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Russia tests its ability to cut itself off from the internet and data belonging to millions of Bangladeshis is floating around the web.

The post Philippine leader Bongbong Marcos’ digital disinformation regime appeared first on Coda Story.

]]>
Ferdinand “Bongbong” Marcos Jr., the son of the Philippines’ late and not-so-great former dictator, has been president for just over a year. Last month, Marcos announced a national campaign against disinformation and made some smooth-sounding statements about the importance of media literacy. Taken out of context, this could sound reassuring. But it is pretty rich coming from someone who has dedicated extensive resources to using social media platforms like Facebook and political consultancies like Cambridge Analytica, according to a whistleblower, to help rebrand the image of his family and in particular of his father. 

Ferdinand Marcos Sr. is notorious for his flagrant abuses of human rights, most of which occurred when he put the Philippines under martial law for nearly a decade, beginning in 1972. Since 2019, our friends at Rappler have documented how propaganda spread by Bongbong Marcos supporters and campaign workers across Facebook, Twitter and YouTube helped pave the path to electoral victory in 2022.

But Marcos hasn’t just flooded the zone with disinformation about his record and historical revisionism about his family. He has also continued to pursue threats against websites belonging to small media and civil society organizations. In one case, originally brought by his predecessor Rodrigo Duterte’s administration, officials ordered the National Telecommunications Commission to block independent media sites such as Bulatlat and Pinoy Weekly, alongside a smattering of civil society groups’ websites. The order cited the country’s Anti-Terrorism Act, insinuating that the sites were somehow undermining national security.

My old colleague Mong Palatino, an editor at Global Voices who has worked with Bulatlat, got me up to speed on how affected websites have been responding. The case has them working overtime on their defense, but they are undeterred, he said.

“If the intent is to silence these groups, then the authorities have failed,” Palatino told me. Just as the major social media platforms have helped the Marcos family recast their image, they have also given smaller, more critical media and civil society groups an alternative platform that’s not so easy for the government to snuff out.

“The blocking of websites didn’t prevent these groups from reaching the public through other platforms and social media,” Palatino said. “But we continue to challenge the order and call for its withdrawal,” he added. “It could set a dangerous precedent if authorities continue the systematic crackdown against dissenting voices.”

GLOBAL NEWS

Is Russia thinking about cutting itself off from the global internet? Network observers wish they knew. On July 5, a hiccup in international web traffic moving in and out of Russia came during a test that industry sources told RBC was intended to “check whether the Runet really continues to work” if the country cuts itself off from the global internet. The test seems to have caused brief lapses in connectivity for local sites, as well as major platforms, including Google and Wikipedia. As I wrote in my last edition, the war in Ukraine has made it a whole lot harder for people in Russia to find reliable information about what’s happening on the frontlines and in the halls of power. Although Russia has run tests of its “sovereign internet” in the past, yielding similarly glitchy results, the timing of this one has understandably put Runet watchers on edge. 

“Russian internet experts tell me they doubt Roskomnadzor’s claim that the Sovereign Internet testing was ‘successful’ as the documented internet outages were scattered & not wide scale,” wrote Access Now’s Natalia Krapiva on Twitter. “Sounds like Putin won’t be able to isolate people any time soon, but he’ll keep trying.”

Further complicating the Russian information landscape is Twitter’s new pay-to-play verification system, which appears to be boosting disinformation about the war. The BBC has a new investigation showing a series of viral tweets promoting demonstrably false reports about the war, ranging from a completely fabricated story about “baby factories” in Ukraine that are selling children into sexual slavery to a heavily twisted spin on the future of elections in Ukraine. They’ve all come from accounts with the blue checkmark that now confirms absolutely nothing about its bearer, apart from the fact that they can afford the $8 monthly fee. Nice work, Elon.

Millions of Bangladeshi citizens’ data was leaked and left online recently, only to be discovered by a security researcher who stumbled upon the leak while running a routine Google search. The data, which includes people’s full names, phone numbers, email addresses and national ID numbers, was in the possession of a government agency that the researcher opted not to name, in the interest of protecting the privacy of the millions who could be affected by the leak. Similar to neighboring India, while Bangladesh’s efforts to digitize its national ID system may be intended to help streamline things like the delivery of social services, they have brought serious unintended consequences when it comes to data privacy.

If you’re sitting somewhere in the U.S. and saying to yourself, “phew, nothing for me to worry about this week,” hold that thought. A Nebraska woman pleaded guilty last week to helping her teenage daughter get abortion pills and has been sentenced to two years in jail. The story is painful to read and think about, and most of what it deals with lies beyond the scope of this newsletter. But what drew me to the item was the fact that Facebook messages, sent between the woman and her daughter, were a key piece of evidence used against her. If Facebook were to offer end-to-end encryption on its Messenger service by default, the company would have had nothing to hand over when the prosecutors came knocking. But it doesn’t. As state-level crusades against people seeking abortions continue to play out in post-Roe America, I have no doubt that we’ll see many more cases like this one. If you haven’t already, I suggest you go download a real end-to-end messaging app, like Signal.

WHAT WE’RE READING (AND LISTENING TO)

  • Two editions ago, I wrote about the hundreds of migrants who died in the Mediterranean sea in early June and pointed to the increasing use of surveillance by state border agencies seeking to keep migrants from entering the EU. Foreign Policy’s Andrew Connelly has a compelling new essay on the same topic that is very much worth a read.
  • The Meta Oversight Board recently recommended that Facebook suspend Cambodian President Hun Sen from the platform for six months — this is especially serious since elections are coming up later in July. Sen responded by closing his account altogether and kicking Meta employees out of the country. Rest of World has a good play-by-play on the ongoing fallout.
  • The Wall Street Journal’s Karen Hao has a great new podcast out of Kenya, where she interviewed people who are struggling with PTSD after working on the frontlines of content moderation for ChatGPT.

The post Philippine leader Bongbong Marcos’ digital disinformation regime appeared first on Coda Story.

]]>
Russia’s digital scramble to control the ‘coup’ narrative https://www.codastory.com/newsletters/russia-coup-internet-shutdown/ Fri, 30 Jun 2023 14:39:53 +0000 https://www.codastory.com/?p=45015 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Twitter hands Polish authorities an LGBTQ activist’s data and Apple (finally) balks at surveillance requirements in the UK’s Online Safety Bill.

The post Russia’s digital scramble to control the ‘coup’ narrative appeared first on Coda Story.

]]>
The infamous information manipulation strategies of the Kremlin were seriously tested over the weekend following Wagner Group mercenaries’ near-descent on Moscow. Russian censors were swift to respond on some fronts. News sites and aggregators became inaccessible online — Google News was blocked by five major internet service providers, including the country’s state-owned telco, Rostelecom. Social media platforms, including the super-popular social media and messaging app Telegram, also faltered, with service shutdowns in Moscow, St. Petersburg and along the route from Rostov-on-Don —  which the Wagner Group swiftly, if briefly, occupied — to Moscow.

Russians looking for real information about why Wagner troops traveled all that way only to turn around, and why Wagner’s leader hightailed it to Minsk, were hard-pressed to find it. It wasn’t surprising. Since Russia’s war on Ukraine began, news outlets that aren’t aligned with the Kremlin have scarcely been able to operate, and the vast majority of independent media and their journalists now work outside the Russian territory. In the days since the not-coup, even more websites have been taken down, and searches on Wagner Group leader Yevgeny Prigozhin’s name were blocked on Yandex, Russia’s leading search engine, and on VK, the country’s answer to Facebook.

While Russia watchers observed Kremlin-aligned media handling the incident somewhat clumsily, Prigozhin seemed to have captured much of the narrative thanks to his Telegram channel, a signature platform that he has leveraged for some time. Prigozhin has long been a savvy propagandist and early player in the global disinformation game. He launched and led Russia’s Internet Research Agency, the troll farm that became notorious for its attempts to malignly influence the 2016 U.S. national elections.

The particulars of this week’s events aside, I find myself thinking about the broader effects of the past 16 months on Russia’s information environment. Sure, Russia was never shy when it came to internet censorship — years of evidence from groups like OONI and Freedom House make this clear. But the invasion of Ukraine, as well as the government’s growing need to control what information people can access, has put Russia in a digital quarantine of its own making. Major social media platforms, including Facebook and Twitter, have been blocked since the start of the war. Hundreds of thousands of websites, many of them reporting and publishing news to high standards, are no longer accessible. And more sites and applications seem to discontinue their services in Russia every day. 

Just yesterday, my colleague Ivan Makridin lamented that Slack had stopped offering its services in the Russian language. This may sound banal. But it adds to the digital and professional isolation of Russians. And it makes it easier for Putin to spread his propaganda.

GLOBAL NEWS

A Polish LGBTQ rights activist says Twitter has handed his data over to Poland’s Ministry of Justice. Bart Staszewski, an influential advocate and filmmaker based in Warsaw, tweeted photos of an order from the U.S. Department of Justice, submitted on behalf of Polish authorities, demanding his data from the Silicon Valley company. Staszewski didn’t disclose how he got a copy of the order but said he believes it is politically motivated.

“@Twitter (now @elonmusk) is giving access to my account based on false accusation [sic] of Polish right-wing government officials. It should be a warning point for all activists and whistleblowers from East Europe,” he tweeted. “You don’t need #Pegasus, or china spy software – you just need to abuse legal tools and ask for international help. Disgusting.”

Another Pride Month slap in the face came from Netflix last week, this time in Kenya, where the U.S.-based streaming giant agreed to stop all programming featuring LGBTQ characters and themes. Netflix Africa made the change as part of an agreement with the Kenyan Film Classification Board, which restricts programming that “glorifies, normalizes, promotes and propagates homosexuality,” in accordance with Kenyan law. Although same-sex relations are already criminalized in Kenya, right-wing legislators are pushing to expand laws that would restrict LGBTQ people’s rights, with the so-called “Family Protection Bill,” which invites unfortunate comparison to neighboring Uganda’s increasingly homophobic regime. 

Transgender people in Tennessee are afraid that their data is no longer safe with their doctors. Vanderbilt University Medical Center handed over medical records for a group of transgender patients to the state attorney general’s office, as part of what state officials say is a billing fraud inquiry. As per hospital policy, Vanderbilt notified the patients and their families, sparking instant concern about the safety of their data in Tennessee, where Governor Bill Lee approved a law in March that bars minors from receiving gender-affirming healthcare.

It’s a good time to start using end-to-end encryption, whether it’s the Polish Ministry of Justice, the Tennessee AG’s office or some other judicial authority you’re worried might come after your data. But that could be tough in the U.K., if the so-called Online Safety Act is to pass. This week, Apple joined the ranks of Signal and WhatsApp when it publicly criticized the draft law’s provisions requiring companies to water down security standards in order to allow authorities to more easily scan people’s communications for child abuse material. As Signal President Meredith Whittaker put it, “Encryption works for all or it’s broken for all. There’s no safe, private way to conduct mass surveillance. & no amount of magical thinking or specious claims will alter this stubborn reality.”

WHAT WE’RE READING (AND LISTENING TO)

  • Jacobin Radio has a fresh new episode featuring AI real-talkers Meredith Whittaker, Edward Ongweso Jr. and Sarah Myers West on the “mundane dystopia concealed beneath the AI hype machine.”
  • Writing for Rest of World, Liani MK has a great new immersive feature on how indigenous groups in Malaysia are using digital mapping tools to assert their land rights.

The post Russia’s digital scramble to control the ‘coup’ narrative appeared first on Coda Story.

]]>
How surveillance tech controls the fate of migrants https://www.codastory.com/newsletters/migration-surveillance-tech/ Thu, 22 Jun 2023 15:14:34 +0000 https://www.codastory.com/?p=44723 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: OpenAI greases regulatory wheels in the EU and digital disinformation researchers face legal threats in the U.S.

The post How surveillance tech controls the fate of migrants appeared first on Coda Story.

]]>
Hundreds of people remain missing and are feared dead following last Wednesday’s shipwreck in the Mediterranean sea. Rescue organizations began receiving distressed calls from people aboard the fishing vessel that was carrying an estimated 750 migrants, mainly from Egypt, Libya and Pakistan, who said they had run out of supplies and that people were dying of thirst. When officials from Greece’s Hellenic Coast Guard approached by boat, the vessel made a turn so abrupt that it capsized and rapidly sank, with hundreds falling into the sea from the outer decks.

What caused the ship to sink, exactly? What would the Hellenic Coast Guard have done had they successfully reached the vessel? And how did these hundreds of people wind up on this ship, only to plunge to their deaths?

Answers are beginning to emerge, thanks to reporting by various outlets including Mada Masr, which has begun investigating the Egyptian smuggling operation that appears to have brought many of the passengers to the ship to begin with.

The story underlines the life-threatening lengths people now go to in order to reach European shores. What it doesn’t tell us, at least not yet, is how technology is playing an increasingly powerful role in determining the fates of people seeking to cross borders. Last month, at Coda, we published an investigation into the International Centre for Migration Policy Development, a Vienna-based non-government agency that has received hundreds of millions of euros in contracts from the EU to supply tools and tactics — including surveillance tech — to non-EU governments in exchange for their cooperation in preventing people from attempting to migrate to Europe.

Technical surveillance has become an inescapable condition of crossing a border in pursuit of a better life. This issue dominated tech news from Europe this past week as the European Parliament finally reached an agreement on critical components of the AI Act, which has been hotly debated for months. The regulation, which is designed to require tech manufacturers to assess and disclose the possible risks that their products might pose for people, covers everything from seemingly innocuous chatbots to AI-powered surveillance technology. But the current text has critical carve-outs that allow immigration enforcement agencies to use facial recognition and other “discriminatory profiling” technologies on migrants and people seeking refugee or asylum status in the bloc. 

And it’s not just the EU. Last week, we published a deep dive by my colleague Erica Hellerstein on the increasingly tech-centric approach to immigration enforcement by U.S. authorities. Technical surveillance, whether through using an ankle monitor or a facial recognition-enabled mobile app, is a constant for the estimated quarter of a million migrants enrolled in the “Alternatives to Detention” program while they await immigration proceedings. While the Biden administration has promoted the program as a more humane, economical and effective way to enforce immigration policy, researchers and advocates have found that the long-term psychological effects for people in the program are severe. One enrollee put it to Erica this way: “If you have an invisible fence around you, are you really free?”

IN GLOBAL NEWS

  • It turns out Sam Altman’s global PR tour was not just for show. Behind the pseudo-diplomatic meet-and-greets, OpenAI’s CEO appears to have been greasing the policymaking wheels in order to cultivate a friendly regulatory environment for his $29 billion company. A new investigation from TIME’s Billy Perrigo revealed this week that in the EU, Altman urged parliamentarians to water down the AI Act so that OpenAI’s tools wouldn’t be placed in the “high risk” category of the draft regulation, which would subject the company to more regulatory scrutiny. His efforts seem to have borne fruit. The language that OpenAI objected to has indeed been removed from the text as it currently stands. Of course it wouldn’t be a proper EU regulation without yet another round of deliberations that could take up to six months, or maybe more, so it’s hard to say exactly what effects all this will all have. But it certainly lays bare Altman’s true agenda.
  • A prominent group of U.S. research organizations are being sued over their efforts to reduce Covid and election-related disinformation on social media during the 2020 general election in the U.S. The GOP-led “legal campaign,” as the New York Times put it, targets members of the Election Integrity Partnership, which included the Stanford Internet Observatory, the German Marshall Fund and the Atlantic Council’s Digital Forensic Research Lab among others. A lawsuit filed in Louisiana by the editor of the far-right news site The Gateway Pundit, which is known for spreading falsehoods, accuses these institutions of “working closely” with government officials to “urge, pressure, and collude with social-media platforms to monitor and censor disfavored speakers and content.”
  • These organizations were studying disinformation on social media and giving major platforms insights and recommendations based on what they were finding. Although they published much of the resulting work, some aspects of the partnership were kept under wraps, generating suspicion about what may or may not have happened behind closed doors. The resounding message from those behind this lawsuit and other legal challenges against the researchers is that the partnership led to widespread censorship of right-wing views on mainstream social media sites. But hang on a second. If anyone got censored, it was the companies — Facebook, Twitter, etc. — who actually had the power to remove speech or accounts, not the researchers. So why are lawmakers and right-wing media targeting these research groups? The Knight First Amendment Institute’s Jameel Jaffer put it simply when he described the campaign as a wildly partisan “attempt to chill research.”

WHAT WE’RE READING

  • The Verge’s Josh Dzeiza has a great new deep dive on the “vast tasker underclass” of people who work to make tools like OpenAI’s ChatGPT seem almost as smart as a real person.
  • Scarly Zhou has a new story for Rest of World that looks at how sexism and misogyny have become a recurrent problem for female users of Glow, a new Chinese AI platform featuring customized, highly real-seeming chatbots.

The post How surveillance tech controls the fate of migrants appeared first on Coda Story.

]]>
Big Tech looks the other way on Saudi Arabia’s human rights abuses https://www.codastory.com/newsletters/saudi-arabia-human-rights-big-tech/ Thu, 15 Jun 2023 15:19:28 +0000 https://www.codastory.com/?p=44572 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Hong Kong officials go after Google and Sam Altman “reaches out” to China.

The post Big Tech looks the other way on Saudi Arabia’s human rights abuses appeared first on Coda Story.

]]>
DOES SAUDI REPRESSION MATTER TO BIG TECH?

A young woman in Saudi Arabia is in jail for using Twitter and Snapchat to advocate for an end to the country’s male guardianship rules and for failing to wear “decent” clothes. It came to light last month that Manahel al-Otaibi, 29, has been in pre-trial detention since November 2022. But as is too often the case with Saudi Arabia, it took months for the story to reach international media, and publicly available details on al-Otaibi’s case remain scarce. And since details from court proceedings are almost never made public in Saudi Arabia, we can only speculate about what evidence will actually be brought against al-Otaibi. But it is conceivable that prosecutors will ask one or both companies to hand over her private data — a standard move in cases like these.

What kind of data might the Saudi government expect companies to hand over? The answer to that question may be up in the air, on account of the fact that some of the biggest U.S.-based tech companies have drastically increased their presence in Saudi Arabia in recent years. Last month, Microsoft announced plans to establish a cloud computing center there, a move that was swiftly condemned by advocates in the Gulf region. And Microsoft is late to the party — in 2018, just a few months after Washington Post columnist Jamal Khashoggi was murdered and dismembered at the Saudi consulate in Istanbul, Google signed a memorandum of understanding with Saudi Aramco, the state oil giant, to build a “cloud region” on Saudi soil. The deal became public in 2020, in a rather vanilla blog post touting the benefits of cloud infrastructure for enterprise. 

When advocacy groups asked what the cloud region would mean for people’s data and privacy in the region, Google responded with a mostly boilerplate letter, in which it offered only this assurance: “An independent human rights assessment was conducted for the Google Cloud Region in Saudi Arabia, and Google took steps to address matters identified as part of that review.”

Did Google really identify human rights-related “matters” that were so uninteresting that it wasn’t worth saying what the heck they found? This is Saudi Arabia after all.

It reminded me of how Saudi officials initially responded when people asked if they had anything to do with Khashoggi’s murder. They said something to the effect of “we didn’t do it. Just trust us.” If it sounds like a leap to link Google’s data centers with Khashoggi’s murder, remember (or learn) that Saudi intelligence did a whole lot of digital spying on Khashoggi and his contacts before his killing. This government has no qualms about engaging in hardcore surveillance and is even willing to infiltrate a major U.S. tech company — like it did at Twitter in 2015 — to get it done.

Google staffers later explained to us that cloud regions offer infrastructure for all kinds of companies (what they refer to as “enterprise”), often in arrangements where Google actually doesn’t have special access to what’s happening on the platform. But does the company still have some responsibility to safeguard data that it hosts, even when that hosting is happening through its “enterprise” services? Maybe so. 

My friend Mohamad Najem, who runs the Beirut NGO SMEX, put it to me this way at RightsCon last week: “The main question to Google is, ‘We know you’re doing this, but do you want your data center to be affiliated with any potential attack that might happen on netizens?’” Najem argues that the company ought to have some way of auditing its systems to protect against indiscriminate surveillance.
“We analyzed the laws for them,” he said with a shrug, referring to SMEX’s research on personal data-related laws in Saudi Arabia. “We showed them that there is no data protection in these countries. In all cases, you’re putting people’s data at risk.”

IN GLOBAL NEWS

Last week, I wrote about how the stuff of Hong Kong’s once-rich intellectual and civic life is disappearing from the internet. More may vanish soon, if local officials beholden to Beijing get their way.

The Hong Kong government wants YouTube to censor 32 videos featuring “Glory to Hong Kong,” a song exalting an independent, free Hong Kong that became the anthem of Hong Kong’s 2019 pro-democracy movement. Last week, the Ministry of Justice petitioned the High Court to issue an injunction that would prohibit “broadcasting, performing, printing, publishing, selling, offering for sale, distributing, disseminating, displaying or reproducing in any way” the song, which is considered an affront to Beijing’s increasing power over the once relatively autonomous city-state. Google, the owner of YouTube, has further angered Hong Kong officials because the song apparently often appears at the top of search engine results when one enters “Hong Kong national anthem.” Officials have called on the company to change the search results and put the actual anthem at the top instead.

Back in 2010, Google set up shop in Hong Kong as a way to maintain a presence in greater China without being subject to Beijing’s censorship demands. But times have changed. Apart from the song debacle, Google is hedging its bets with Bard, its AI chatbot, which it has not made available in Hong Kong so far. OpenAI has also held ChatGPT back in Hong Kong. The Wall Street Journal reported that this may be due to fears that such products could run afoul of Chinese laws that criminalize criticism of the government.

But this hasn’t stopped OpenAI CEO Sam Altman from “reaching out” to China. As part of his ongoing global roadshow, Altman made a virtual appearance at a conference hosted by the Beijing Academy of Artificial Intelligence last weekend where he made vague calls for “global cooperation” on AI and praised China for having “some of the best AI talents in the world.” Altman’s probably right on that last point, but between China’s Great Firewall and the ongoing tech trade war between the U.S. and China, I’m not sure what he imagines when he talks about “cooperation.” In case you’ve missed it, Altman has also managed to squeeze in meetings with Indian Prime Minister Narendra Modi, French President Emmanuel Macron and South Korean President Yoon Suk Yeol.

A cash-assistance algorithm funded by the World Bank and deployed in Jordan has serious flaws ranging from coding errors to automating discrimination on the basis of characteristics like gender, according to a new report from Human Rights Watch. Jordan isn’t the only country where the World Bank is pushing tech on the welfare sector. Seven other countries in the Arab region are using the system, also at the World Bank’s behest. Amos Toh, who wrote the report, told me that the World Bank has similar projects underway in other regions. Development Pathways, a Swedish social policy think tank, has documented high exclusion error rates across 29 poverty-targeting programs beyond the MENA region.

WHAT WE’RE READING

  • New research from the Center for Countering Digital Hate shows that Google is allowing anti-abortion groups to purchase deceptive ads in its search engine that direct users to “crisis pregnancy centers” — clinics that steer people away from abortion — by targeting common search terms like “abortion clinic near me.”
  • The internet shutdowns that followed the arrest of former Pakistani Prime Minister Imran Khan last month have precipitated serious losses for the country’s tech sector. Zuha Siddiqui has the story for Rest of World.
  • My new favorite letter about AI was written by Prabha Kannan for The New Yorker’s Daily Shouts section and makes some terrific points, all in the form of a mock open letter a la Center for AI Safety. “This letter serves as a warning that the human race will definitely be wiped out because of the humanlike A.I. systems that the undersigned are responsible for developing and releasing into the world.” Wink. Strangely enough, The Onion has a piece with a remarkably similar flavor this week. What a world we live in.

The post Big Tech looks the other way on Saudi Arabia’s human rights abuses appeared first on Coda Story.

]]>
In Hong Kong, a digital memorial of the Tiananmen Square massacre disappears https://www.codastory.com/newsletters/hong-kong-tiananmen-square/ Thu, 08 Jun 2023 14:42:17 +0000 https://www.codastory.com/?p=44189 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Senegal shuts down the internet amid violent clashes, Syria uses shutdowns to prevent exam cheating and Sam Altman’s global tour continues.

The post In Hong Kong, a digital memorial of the Tiananmen Square massacre disappears appeared first on Coda Story.

]]>
The 1989 massacre at Beijing’s Tiananmen Square is perhaps the most aggressively censored topic on the Chinese internet. For more than two decades now, the anniversary of the massacre, on June 4, has been commemorated online with photographs from the demonstrations, messages honoring victims and emojis of candles symbolizing a vigil. Chinese authorities have always been swift to snuff these messages out on social media, triggering a cat-and-mouse dynamic in which digitally savvy people find workarounds to evade the ever-alert censors. Instead of referencing June 4, for instance, they use “May 35” or simply “64.” And the infamous “Tank Man” photo has been doctored again and again, sometimes with rubber duckies replacing the military tanks barrelling toward the slight young man standing resolute before them, grocery bag in hand. 

Until recently, it was possible in Hong Kong to talk about the events of that day, to discuss the 1989 democracy movement and to publicly memorialize the dead. But this year, as the New York Times’ Tiffany May put it, “Hong Kong is notable for all the ways it is being made to forget the 1989 massacre.” More than 30 Hong Kongers have been either arrested or detained in recent days for engaging in some kind of public demonstration memorializing the slain students.

This history is now disappearing from Hong Kong’s internet too. Having worked closely with journalists in Hong Kong for a number of years, I knew I wanted to mark the anniversary this week. On Tuesday, I decided to go back and look at Weiboscope, a gripping digital archive of photos, art and messages censored on social media in China for their connection with the 1989 democracy movement. But all I found was a blank page. Weiboscope — a joint project of the University of Hong Kong and the University of Toronto’s Citizen Lab — still has a domain, but the archive itself is gone. All you can see now is an empty site with the words “Nothing Found” and the standard verbiage for a WordPress site with no content. 
This is no accident. The digital records of what people cared about, reported on and knew to be true in Hong Kong have been disappearing from the internet as Beijing has consolidated its power over the city-state. The Weiboscope project fortunately had some redundancy — Citizen Lab hosted some of the material here, and my former team at Global Voices covered the project too. But these sites, too, are blocked in China. And still today, anyone who studies these issues will tell you that most university students in China have never heard about the massacre.

IN GLOBAL NEWS

Access to the internet is being carefully controlled in Senegal, where street demonstrations over a criminal case brought against opposition leader Ousmane Sonko have turned violent in recent days. Sonko was convicted of corrupting a minor and given a two-year prison sentence that could keep him from running for office in the upcoming elections. Protests by his supporters, who believe the case against him was politically motivated, rapidly escalated to violent clashes with the police and have left at least 16 dead. Last week, in an effort to quell the unrest, the Senegalese authorities blocked connections to major social media platforms. By Sunday, mobile internet connections in select areas were being shut down altogether, throughout the afternoon and evening each day. NetBlocks has data confirming what appears to be an internet “curfew” strategy on the part of authorities.

And authorities in Syria shut down the internet to keep students from cheating on exams. This has become a somewhat standard practice in a handful of countries, mostly in the Arab region, where national exams are a deciding factor in whether or not a person attends university. In addition to the obvious problems this creates for businesses and basically everyone who uses the internet, local academics told my friends at the Beirut NGO SMEX that the shutdowns haven’t reduced the number of students who try to skirt the rules. In short, cheaters gonna cheat.

On that note, I’ve been keeping an eye on OpenAI CEO Sam Altman’s global PR tour, which is surely meant to steer global regulatory heavyweights in his company’s favor. It hit peak cringe for me last Thursday, when Altman met with Ursula von der Leyen, the president of the European Commission. Von der Leyen tweeted a photo of herself and Altman, standing in front of an EU flag, stiff diplomatic meeting-style. But in this picture, Von der Leyen looks positively delighted, and Altman looks like he’s trying really hard not to crack up. I’m pretty sure the joke is on her.

WHAT WE’RE READING

  • New Lines has a bombshell story from a group of U.K. researchers who have combed through Meta’s ad library to trace how the U.K. government is running “fear-based campaigns” with ads on Facebook and Instagram targeting migrants from Africa and Asia, telling them not to come to the U.K.
  • It’s great that Meta is letting some researchers into its ad library. For folks in the U.S., this will become an extra rich resource as the 2024 election approaches, especially since Meta (alongside Google) has decided to do away with some of its policies and tactics for reducing election-related disinformation. Axios has a good breakdown of what this might mean for next year.
  • AI-driven weapons should scare everyone. This week, +972 magazine took a close look at the Israeli Defense Forces’ use of AI to sharpen its tactics in Gaza. Read it and remember that this may be in Gaza now, but it will probably reach a city or country near you before too long.

The post In Hong Kong, a digital memorial of the Tiananmen Square massacre disappears appeared first on Coda Story.

]]>
Why tech tycoons are ignoring the clear and present dangers of AI https://www.codastory.com/newsletters/ai-existential-risk/ Fri, 02 Jun 2023 13:12:58 +0000 https://www.codastory.com/?p=43993 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Chinese authorities censor mosque demonstration videos, Vietnam might ban TikTok and AI tycoons keep ignoring the clear and present dangers of AI.

The post Why tech tycoons are ignoring the clear and present dangers of AI appeared first on Coda Story.

]]>
While videos of last weekend’s confrontation between Hui Muslims and police were wiped from Chinese social media sites, they have been making the rounds on the global internet. Authorities in the southwestern Yunnan province had planned to demolish a dome atop the historic Najiaying Mosque in the rural town of Nagu but were blocked by thousands of local residents who formed a protective circle around the mosque. Hundreds of police officers in riot gear surrounded the demonstrators and the standoff went on throughout the weekend. The mosque’s dome was slated for destruction as part of ongoing central government “Sinicization” efforts that are papering over, and in some cases literally destroying, evidence of the influence of other cultures and religions in China, Islam in particular. Domes on mosques are being targeted because of their obvious connection to Arab culture and replaced by architecture intended to appear more traditionally “Chinese” in character. 

An estimated 30 people have since been arrested, and sources speaking about the confrontation with CNN said that the internet had been shut down in select neighborhoods around the town. Editors at China Digital Times collected and reposted videos of the standoff before they were censored on Weibo. The videos offer valuable evidence of the government’s crackdown on certain kinds of religious expression, even as China’s constitution guarantees “freedom of religious belief.”

Vietnam is ratcheting up pressure on TikTok to reduce “toxic” content and respond to its censorship demands, lest the platform be banned altogether. To show they mean business, Vietnam’s Ministry of Information and Communications began an investigation of the company’s approaches to content moderation, algorithmic amplification and user authentication last week. This is especially shaky territory for TikTok. With nearly 50 million users, Vietnam is one of TikTok’s largest markets. And unlike its competitors Meta and Google, TikTok has actually complied with Vietnam’s cybersecurity law and put its offices and servers inside the country. This means that if the local authorities don’t like what they see on the platform, or if they want the company to hand over certain users’ data, they can simply come knocking. 

Pegasus, the world’s best-known surveillance software, was used to spy on at least 13 Armenian public officials, journalists, and civil society workers amid the ongoing conflict between Armenia and Azerbaijan over the disputed territory known as Nagorno-Karabakh. A report on the joint investigation by Access Now, Citizen Lab, Amnesty International, CyberHub-AM and technologist Ruben Muradyan asserts that this is “the first documented evidence of the use of Pegasus spyware in an international war context.” While there’s no smoking gun proving that the software, built by Israel-based NSO Group, was being used to aid one side of the conflict or the other, the location and timing of the deployment certainly suggest as much. 

This should scare everyone. Having this kind of spyware on the loose in war and conflict zones only increases the likelihood of these tools being used to aid and abet human rights violations and war crimes, as the researchers point out. What does NSO have to say about all this? So far, not much. I’ll keep my ears open.

AI TYCOONS CRY WOLF

If you’re worrying about AI causing us all to go extinct, try to calm down. Yet another AI panic statement has been signed by some of the most powerful people in the business, including OpenAI CEO Sam Altman and ex-Google Brain lead Geoffrey Hinton. They offer just a single doom-laden sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” 

I don’t disagree, but is this apocalyptic scenario what we should be focusing on? What about the problems that AI is already causing for society? Do autonomous war drones not worry these people? Are we okay with automated systems deciding whether your food or housing costs get subsidized? What about facial recognition technologies that, study after study, are proven unable to accurately identify the faces of people with dark skin tones? These are all real systems that are already causing real people existential harm.

Some of the world’s smartest computer scientists are studying and trying to build solutions to these problems. Here’s a great list of them. But their voices are utterly absent from the narrative that these AI tycoons are spinning out.

The people behind this statement are overwhelmingly wealthy, white and living in countries that are not at war, so maybe they just didn’t think of any of the already terrible real world impacts of AI. But I doubt it.

Instead I believe this is some serious strategic whataboutism. University of Washington linguist Emily Bender offered this suggestion:

“When the AI bros scream ‘Look a monster!’ to distract everyone from their practices (data theft, profligate energy usage, scaling of biases, pollution of the information ecosystem), we should make like Scooby-Doo and remove their mask.” Good idea. For next week, I’ll do some follow up research on the statement and whoever is behind the hosting organization — the brand new Center for AI Safety.

WHAT WE’RE READING

My top reading recommendation for this week is this latest edition of Princeton computer scientist Arvind Narayanan’s newsletter, where he and scholars Seth Lazar and Jeremy Howard cut the extinction statement down to size. They write:

“The history of technology to date suggests that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth. The AI industry leaders who have signed this statement are precisely the people best positioned to do just that. And in calling for regulations to address the risks of future rogue AI systems, they have proposed interventions that would further cement their power.”

I also highly recommend this piece in WIRED by Gabriel Nicholas and my old colleague Aliya Bhatia, who are doing important research on the challenges of building AI across languages and the harms that emanate from English language-dominance across the global internet.

The post Why tech tycoons are ignoring the clear and present dangers of AI appeared first on Coda Story.

]]>
An AI entrepreneur bets on cryptocurrency to mitigate AI’s dangers https://www.codastory.com/newsletters/worldcoin/ Thu, 25 May 2023 14:53:54 +0000 https://www.codastory.com/?p=43705 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Spain wants to axe encryption and Guinea shuts down social media amid public unrest.

The post An AI entrepreneur bets on cryptocurrency to mitigate AI’s dangers appeared first on Coda Story.

]]>
The EU smacked Meta with the largest fine it has ever issued on data protection grounds on Monday, demanding that the Silicon Valley giant, which brought in $116 billion in revenue in 2022, cough up $1.2 billion for moving EU Facebook users’ data into the U.S., in violation of Europe’s General Data Protection Regulation. The decision follows years of back-and-forth between Meta and EU data protection authorities over the company’s practice of moving user data in and out of territories where its infrastructure sits. ZDnet has a good breakdown of the backstory here.

Georgetown Law professor Anupam Chander pointed out on Twitter that the decision has a whole lot more to do with concerns about overly broad electronic surveillance by the U.S. government than it does with the improper handling of EU users’ data or with Meta specifically. Chander argued that it could pave the way to a global norm of so-called “data localization,” under which companies would be required to store data where they collect it. It might sound simple, but policies like this could really upend things for big companies like Meta and Google, whose business models rely on the ability to send data around the world quickly and to monetize it constantly.

In other EU privacy news, Spain is keen to outlaw end-to-end encryption, according to a bombshell document leak released earlier this week by WIRED. The key document is a survey of EU member states’ opinions on technical measures that might be taken to track down purveyors of child sexual abuse images. Of the 20 countries consulted, 15 expressed at least some interest in limiting end-to-end encryption, with Cyprus and Hungary showing special enthusiasm alongside Spain. If this were to happen, it could have serious implications for apps like Signal and WhatsApp, which allow users to communicate using a technical protocol that prevents anyone — even the company running the app — from reading the content of their messages. A few months back, when similar proposals arose amid deliberations on the U.K.’s Online Safety Act, the heads of both Signal and WhatsApp said they would stop offering service in the U.K. if the state were to try to weaken encryption there.

Signal president Meredith Whittaker elaborated on it in a blog post. “Let me be blunt,” she wrote, “encryption is either broken for everyone, or it works for everyone. There is no way to create a safe backdoor.” True that.

Authorities in Guinea blocked access to major social media sites last week after several days of protests and rioting in the country’s capital. This week, military officials shut down two radio stations and threatened to do the same to any media outlet that “undermines national unity.” The West African country has been governed by its military since the 2021 coup that overthrew President Alpha Conde. Although the ruling junta has attempted to map out a two-year path to reinstating a civilian government, an alliance of opposition political leaders and labor unions has called for demonstrations and a fast-track back to civilian rule.

WE’LL GIVE YOU UNIVERSAL BASIC INCOME — JUST GIVE US YOUR EYEBALLS

Last week, I wrote about OpenAI CEO Sam Altman, who is in the limelight right now because of ChatGPT, his company’s signature product. One critical concern about super smart AI is that it makes the act of online impersonation easier and more convincing than ever. Old-school Twitter bots, for example, are pretty easy to pinpoint if you know what to look for — awkward phrasing, evenly-timed repetition of certain messages, occasional misspellings. But with tools like ChatGPT out in the wild, it gets a lot more complicated. How do we know who’s real when bots start to sound exactly like the real people they claim to be?

Altman has a solution for that, too. It’s called WorldCoin: a cryptocurrency system that claims to offer “a privacy-preserving digital identity designed to help solve important, identity-based challenges, including proving an individual’s unique personhood.” In its own words, WorldCoin also wants to enable “universal access to the global economy by building the world’s largest identity and financial public utility.” Talk about lofty goals. All the hype around ChatGPT seems to have been a boon for WorldCoin, which is on the verge of having raised $100 million.

How do you get in on WorldCoin? There are two routes you can take. One is to get in on the ground floor. Early WorldCoin investors include Silicon Valley venture capital kingmaker Andreessen Horowitz and Sam Bankman-Fried, of FTX dumpster fire fame. The other way is by letting WorldCoin scan your retina and create an “IrisCode” that will allow the system to verify your identity, forever. Since last year, the company has sent its signature chrome “orbs” (futuristic-looking balls with retina scanners inside them) to cities around the world where it has paid people to promote the cryptocurrency and lure in new users by offering them small amounts of cash, literally in exchange for their eyeball data. A big catch is that you actually can’t buy WorldCoin if you’re in the U.S. or EU, due to regulatory restrictions — this, alongside the company’s Cayman Islands HQ, doesn’t inspire confidence. Are we really talking about universal basic income here or is this just magical thinking pumped up by big VC money? And how will all this iris data be protected from breaches or abuse? WorldCoin offers precious little detail on this incredibly important question.

Now is a really good time to read Eileen Guo and Adi Renaldi’s 2022 investigation of WorldCoin for MIT Tech Review. The two journalists visited six countries in Africa, Asia and Latin America to find out what exactly was happening when WorldCoin reps went out into cities and towns and tried to convince people to give them their iris data. They found that the company used deceptive marketing practices, collected more personal data than it let on — including data about things like people’s heartbeats — and did not get meaningful consent to take people’s iris scans to begin with. There’s clearly a lot more the public needs to know about this company, especially now that it’s back in the spotlight. Keep your eyes peeled.

WHAT WE’RE READING

It’s been a busy week here at Coda, so I have just one recommendation, which is Edward Ongweso Jr.’s expert takedown of venture capitalism for The Nation.

The post An AI entrepreneur bets on cryptocurrency to mitigate AI’s dangers appeared first on Coda Story.

]]>
Tech billionaires want to regulate themselves. What could go wrong? https://www.codastory.com/newsletters/sam-altman-chatgpt-hearing/ Thu, 18 May 2023 14:39:44 +0000 https://www.codastory.com/?p=43489 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Twitter’s compliance with Turkey’s censorship orders, fallout from network shutdowns in Pakistan, and Washington’s new favorite tech bro, Sam Altman.

The post Tech billionaires want to regulate themselves. What could go wrong? appeared first on Coda Story.

]]>
Turkey’s presidential race will go to a runoff, but control over Turkey’s internet clearly belongs to Erdogan. This past weekend, as Turks prepared to cast their ballots, the government appealed to Twitter to censor several hundred accounts that weren’t to its liking. Those with the biggest followings on the list belong to vocal critics of Erdogan and the ruling Justice and Development Party and to journalists like Cevheri Guven, who reports and opines on Turkish politics from exile. Travis Brown is maintaining a list of restricted accounts on GitHub.

Just like it did in India in March, Twitter complied with these requests and suspended a raft of accounts within Turkey, without missing a beat. Elon deflected critics by arguing that Twitter would have been shut down in Turkey if the company hadn’t complied. I guess he didn’t have time to think about alternatives. Yaman Akdeniz, a veteran tech and law expert from Turkey who I spoke with for this newsletter a few weeks back, tweeted that “companies like Twitter should resist the removal orders, legally challenge them and fight back strategically against any pressure from the Turkish authorities.” Indeed, prior to Musk, Twitter was not afraid to challenge these kinds of demands. But these are different times. I shudder to think what it portends for future elections everywhere.

Former Human Rights Watch head Kenneth Roth summed it up well: “Elon Musk just gave away the store,” he tweeted. “By making clear that he prioritizes Twitter’s presence in a country over the platform’s free-speech principles, he has invited endless censorship demands.” Indeed, if other states see Twitter honoring these kinds of requests, what will stop them from pursuing the same tactics? 

People are back online in Pakistan, but the country remains on edge following last week’s arrest of former Prime Minister Imran Khan, which triggered nationwide protests and street violence. In what they said was an effort to restore public order, authorities imposed a wave of network and social media shutdowns. But the chaos continued, and the shutdowns left people unable to communicate or follow the news. Pakistani digital rights expert Hija Kamran told Coda this week that “there is no evidence we can point to anywhere in the world that shows that shutdowns help to restore security.” She’s right. Researcher Jan Rydzak has even shown evidence that shutdowns tend to correlate with — and can even exacerbate — outbursts of violence and social unrest. They’re also really bad for the economy. Total cost estimates for this recent wave of shutdowns vary, but they are on the order of millions of dollars per day.

Want asylum in the U.S? There’s an app for that. Unless you’ve already tried and failed to get asylum in another country, U.S. Customs and Border Protection offers a mobile app, CBP One, that is now the only way you can sign up for an appointment and expect to have your case heard and actually considered. The Biden administration cemented these guidelines after last week’s expiration of Title 42, the Trump-era rule that strictly limited asylum applications as a response to the pandemic. But, of course, people from all over the world continue to flee dire circumstances that endanger their lives and seek asylum in the U.S. The idea that your safety might literally depend on a mobile app is unnerving — and Amnesty International says it violates international human rights law. Even worse, dozens of people who have tried to use the app say it routinely malfunctions. Stay tuned for a big piece we have coming up on this next month from Erica Hellerstein.

CHATGPT BILLIONAIRE DAZZLES AND DINES WITH US LAWMAKERS

So far, 2023 has been a big year for regulating — or thinking about regulating — AI. Last week in the EU, legislators finally nailed down key elements of the bloc’s AI Act. And China’s Cyberspace Administration released a draft regulation last month for managing generative AI, no doubt expedited by global excitement around ChatGPT. Chinese industry is already very much in the AI game, but under China’s political system, companies know better than to speed into oblivion without minding the rules of the road.

And what of the U.S.? It’s the dominant player in much of the global tech industry. But one big reason that it dominates is that, by and large, we don’t regulate.

Yes, the Biden administration has put out a “blueprint” for an AI bill of rights, and we’ve heard months of discussion about how policymakers could, maybe, sort of, think about regulating AI. But past experience with Silicon Valley companies suggests the free-for-all shall continue. And so did a hearing this week at the U.S. Senate Judiciary subcommittee. 

The hearing featured testimony from OpenAI CEO Sam Altman of ChatGPT fame, alongside IBM executive Christina Montgomery and NYU computer science professor Gary Marcus. Lawmakers focused on Altman and asked serious questions that the 38-year-old billionaire — and Stanford drop-out — answered with what seemed like pleasure. It probably helped that he’d dined with several of them the night before and evidently dazzled them with some product demos. Representative Anna Eshoo, who chairs the Congressional AI Caucus and has backed serious privacy protection bills in recent years, told CNBC that it was “wonderful to have a thoughtful conversation” with Altman. Yikes.

It was in stark contrast to other recent tech hearings where CEOs have been pummeled by legislators furious about companies exploiting people’s data, profiting off disinformation and promoting hate speech that leads to real-world violence. They seem not to realize that the issues that rightly angered them when they last grilled Meta’s Zuckerberg and Google’s Pichai, alongside a host of other problems more specific to generative AI, are totally on the table here. 

Altman said over and over that he thinks regulation is necessary — Mark Zuckerberg has often said the same — and even suggested some policy moves, like establishing a special agency that would oversee and give licenses to companies building large language models. Although his smooth talk may have given the impression that he came up with these ideas himself, experts who don’t stand to profit from the technology have pushed for much more nuanced versions of what he talked about for years. 

Perhaps it is more valuable to consider what Altman didn’t say — he made no mention of the fact that companies like his depend on the ability to endlessly scrape data from the web, in order to train and “smarten” their technologies. Where does all that data come from? You! Literally, we’re all putting information into the internet all the time, and in the U.S., there are no laws protecting that data from being used or abused, whether by private companies, political parties or anyone else. 

I talked about it with my old colleague Nathalie Marechal, who now co-leads the Center for Democracy & Technology’s Privacy and Data Project. “Trying to regulate AI without a federal comprehensive privacy and data protection law seems like a fool’s errand,” Marechal told me. “We need a data privacy law. From there, we can build on that by regulating specific ways of developing AI tools, specific applications. But without rules on how you can collect data, how you can use it, how you can transfer it, anything else to me seems like you’re skipping a step.”

She also described Altman’s moves in D.C. as a “charm offensive” and suggested that by promoting regulation at this stage, companies like OpenAI are better positioned to push some of the blame to Washington when something bad happens involving their products.

Will the U.S. ever meaningfully regulate tech? I really don’t know. But we definitely will get to see what happens when you let the people making the most money off the industry set the agenda.

WHAT WE’RE READING

  • The harms coming from AI are already clear and present, especially for people using social services or living in public housing. The Washington Post has a new investigation on the use of video surveillance and facial recognition tech in public housing developments across the U.S. Don’t miss it.
  • In a commentary piece for Jurist, Sudanese social media researcher Mohamed Suliman writes that big tech companies in the U.S. have emboldened Sudan’s RSF militia to “spread propaganda and steer public opinion in a bid to normalize their actions and conceal their crimes.” Suliman has been making this argument for years — I’m glad it’s finally getting some attention.

The post Tech billionaires want to regulate themselves. What could go wrong? appeared first on Coda Story.

]]>
Brazil takes on Big Tech in the fight against ‘fake’ news https://www.codastory.com/newsletters/brazil-fake-news-bill/ Thu, 11 May 2023 17:26:09 +0000 https://www.codastory.com/?p=43325 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Pakistanis face internet cuts after Imran Khan’s arrest and Twitter purges potential evidence of war crimes.

The post Brazil takes on Big Tech in the fight against ‘fake’ news appeared first on Coda Story.

]]>
Authorities imposed internet shutdowns across Pakistan on May 9 as protests and riots broke out nationwide following the arrest of former Prime Minister Imran Khan. Security forces arrested Khan on charges concerning a land acquisition during his time in office. But the move reflects long-standing tensions between Khan and Pakistan’s military, of which the former prime minister has become a vocal critic. Should Khan remain in custody, he will likely be barred from running for office in Pakistan’s general elections expected to be held later this year.

Alongside mobile broadband blackouts in several areas of the country, technical researchers at NetBlocks identified disruptions of Facebook, Twitter and YouTube. These kinds of outages have become a knee-jerk response of governments in many parts of the world when public unrest peaks and authorities are desperate to quell protests. But right now in Pakistan, it’s “not exactly a sign of strength,” tweeted Mohammed Taqi, a columnist for the Indian news website The Wire. Pakistani human rights lawyer and tech expert Nighat Dad called the blocks “unconstitutional” and noted that they could actually help promote the spread of disinformation online. We’ll see what effects it all has in the days to come.

Twitter continues to mediate the war in Sudan. Mohamed Suliman, who I spoke with for this newsletter a few weeks back, has been tracking digital effects of the war from afar. This week, he noted that the Twitter account belonging to the Rapid Support Forces, the paramilitary organization that is at war with the Sudanese army, was tweeting photos of people that the RSF claimed to have captured. “Does Twitter content policy allow this?” he wondered. This is a really good question. But I don’t think we can expect any answers from Twitter’s PR office, which now regularly replies to media requests with nothing but this: ????

Elon Musk also abruptly announced plans to “purge” inactive accounts. People were quick to point out that this will eliminate the historical value of accounts that were once held by major voices who have gone missing or passed away. Amnesty International’s Bissan Fakih offered a few familiar names of people missing in Syria, like Syrian human rights lawyer Razan Zeitouneh and U.S. journalist Austin Tice. “Twitter is an invaluable source of material relevant to war crimes investigations around the world, much of it from accounts dating back to 2010-on,” tweeted Charles Lister, of the Middle East Institute, a nonprofit think tank. “Simply ‘purging’ such a wealth of evidence would materially weaken the long-term pursuit of justice.”

BRAZIL TAKES ON BIG TECH

Brazil’s Congress will soon vote on a Bolsonaro-era “anti-fake news” bill that would require big tech platforms to proactively remove hate speech and disinformation, curb mass messaging by politicians and make other changes that would allow regulators to have a heavier hand in determining what kinds of material stays online in Brazil and what comes down.

The bill was fast-tracked earlier this year in the wake of a series of school shootings that raised painful questions about how hatred and incitement to violence can spill from digital spaces into the real world.

Under some regimes, the law would certainly run the risk of creating an information authority that could be used to stifle the voices of critics. But it would also require Big Tech companies to overhaul their systems for reviewing and removing harmful content and probably force them to work much more closely with regulators, akin to what’s required of them in the EU under the Digital Services Act. These are all things that will detract from their profits in Brazil, which is a super lucrative market for the industry.

It’s no surprise that Big Tech companies don’t want it to pass. At the start of May, Google ran a prominent ad on its services in Brazil asserting that the law will “make your internet worse.” Meta and Spotify have also gotten in on the action. The vote was postponed last week, and Brazil’s Supreme Court now says that all three companies will need to testify before federal police about their campaigns against the bill.

A quick look at what’s happening on Twitter right now in Brazil elucidates some of the problems at hand. This is because Twitter, as I wrote last week, seems to be doing relatively little to proactively moderate content on its platform. I talked about it recently with my old colleague Yaso Cordova, a Brazilian privacy expert who also looks at harmful content online.

“Before Elon, Twitter used to comply with Brazilian laws,” Cordova told me. But Musk dismissed the entire trust and safety team in Brazil shortly after taking the helm. From there, she said, researchers saw a rise in threats of violence and hate speech, including neo-Nazi content, which is a crime in Brazil. In the past, the company would have proactively removed harmful stuff. But not anymore.

“They might be responding very well to government content removal [requests],” she told me, referring to state-issued requests, “but the government cannot oversee everything and everyone.” If the anti-fake news bill passes, it will change the game, more or less forcing the company to comply with stricter regulation or risk getting thrown out of Brazil altogether.

Cordova emphasized the fact that under the current government, removals according to state requests might work okay. But, she said, if Bolsonaro were to be elected again tomorrow, “it would be horrible.” Were Bolsonaro to return to power in the future, she argues Brazil would see a fresh round of censorship on Twitter, targeting critical voices.

Looking at the big picture, Cordova sees Big Tech’s behavior in Brazil as an example of digital colonialism. She described to me how Silicon Valley companies have invested in developing countries with big populations, like India and Brazil, “so they can harvest these people’s data.” “They will not spend on fixing the product according to local laws,” Cordova says, “because this will not allow them to maximize revenue.”

But the anti-fake news bill could force the proverbial hand of industry on this. For many authoritative voices on the topic, the demands that the bill makes of the Big Tech companies are long overdue.

WHAT WE’RE READING

The meta-race to write the best critique of the race to build hyper-awesome AI continues. Here are two top picks from this week:

  • Writing for the Guardian, climate activist and scholar Naomi Klein argues that the AI race is really a big steal by Big Tech: “The wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) [are] unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent.”
  • Sci-fi writer Ted Chiang says that we should see AI not as a genie, or as a King Midas-like figure, but rather as a management consulting firm, like McKinsey. “If you want something done but don’t want to get your hands dirty, McKinsey will do it for you,” he writes in The New Yorker. Sounds all too familiar.

And finally, the community of volunteers who run Wikipedia are at odds over whether, and how, to use large language models like Chat GPT in building and maintaining the world’s largest online encyclopedia. VICE has the details.

The post Brazil takes on Big Tech in the fight against ‘fake’ news appeared first on Coda Story.

]]>
From Turkey to India, Twitter offers censorship on demand https://www.codastory.com/newsletters/twitter-censorship-turkey/ Thu, 04 May 2023 18:16:04 +0000 https://www.codastory.com/?p=43119 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Brazil blocks Telegram over neo-Nazi chats and African content moderators vote ‘yes’ on unionization.

The post From Turkey to India, Twitter offers censorship on demand appeared first on Coda Story.

]]>
When governments come knocking, Twitter is glad to censor. Under Elon Musk, the bird company is honoring most government demands to take down tweets or hand over users’ data, according to data from Lumen and a new report from our friends at Rest of World.

It wasn’t like this before. Publicly available records suggest that the company is fully complying with 80% of government requests, in contrast to the pre-Musk era, when that number hovered around 50%. This was thanks in part to policy staffers who worked hard to figure out when government demands were really legitimate and when they were overblown. But Musk fired most of the people doing this work shortly after taking the helm. And now people around the world are feeling the consequences.

More on this below.

With the “fake news” bill looming, Brazil blocked Telegram over neo-Nazi channels.

Brazil’s Congress is poised to vote on a controversial, Bolsonaro-era “anti-fake news” bill that would require big tech platforms to proactively remove illegal content, curb mass messaging by politicians and make other changes that would give the government more leverage when dealing with foreign tech powers. Silicon Valley wants none of it — Google even used its quasi-monopolistic online presence to push its agenda in Brazil — and some free expression advocates are concerned too.

As if to offer a case in point, major internet providers in Brazil blocked Telegram on April 26, over the Dubai-based company’s refusal to hand over information about neo-Nazi activity on the platform. Police had requested data about two groups they suspect used Telegram to encourage a series of violent attacks at schools in Brazil in recent months. Telegram says the data can’t be recovered. The block on Telegram was lifted on May 2, but the company is still racking up fines to the order of $200,000 per day. For Telegram’s part, the company says it has never complied with a single data request from a government or anyone else. If the fake news bill passes, this may have to change, or Telegram may need to say, “bye bye, Brazil.”

African content moderators are uniting. A coalition of workers who clean up troublesome content for some of the world’s largest internet platforms — Meta, ByteDance (owner of TikTok) and OpenAI — voted to form the African Content Moderators Union this week. This is the latest development coming out of legal battles in Kenyan courts over the rights of content moderation workers who are typically hired by third-party companies — Sama and Majorel are two of the biggest players in Nairobi — that offer low pay and next to no benefits. Happy May Day, folks.

TWITTER’S CENSORSHIP-ON-DEMAND REGIME

So now we know: Twitter is taking governments at their word and removing most of the tweets they say are illegal — and maybe some that just rub government officials the wrong way. Meanwhile, emerging evidence shows that the company is less interested than ever in proactively removing content that violates its own policies, not to mention content that violates local laws. I’m thinking here about all the violent, hateful and otherwise nasty stuff that third-party content moderators have to deal with. At least the ones in Kenya might have union representation soon.

I asked Turkish internet law scholar Yaman Akdeniz about it this week — Turkey has made more censorship demands of Twitter than almost any other country on earth. Akdeniz noted that in the past, it was clear that the company “ignored” most requests from Turkey. “I am not sure if this will be the case with the Musk administration,” he said. The data from Lumen certainly suggests that it won’t.

He described how Turkish authorities restricted access to Twitter following the earthquake earlier this year. The response from Twitter was swift. “There was an immediate meeting,” he said, and the ban was lifted within hours. 

“I can only speculate what was promised in that meeting,” he said. “More tweets will be withheld and more accounts will be suspended, that is for sure.” And none of this bodes well for national elections, which are coming up on May 14. “Twitter can easily become the long arm of the law enforcement agencies in Turkey if AKP wins,” Akdeniz warned.

India was also at the top of the list of governments asking the company to take down tweets. In March, we wrote about Twitter’s willingness to censor tweets about the police search for a Sikh secessionist preacher in Punjab. The episode made it look as if the company was glad to do whatever the Indian state or federal authorities asked, including suspending the account of a member of the state assembly. 

If government officials can simply lean on Twitter to silence not only their critics in the public sphere but also their political opponents, the consequences for public discourse — and democracy — will be pretty severe. 

Censor when governments ask, but let the rest flow as it will. What could go wrong?

WHAT WE’RE READING

  • Israeli authorities have added facial recognition technology to their arsenal of surveillance tools used to target Palestinians in occupied territories. Amnesty Tech has a new deep dive on the Red Wolf tool.
  • It’s common in the West to think of China’s internet censorship regime as a monolith, but with a tech industry as big as China’s, controlling online information is complicated and not always consistent. Citizen Lab has a new study on search engine censorship in China that digs into the details — it is worth a read.

The post From Turkey to India, Twitter offers censorship on demand appeared first on Coda Story.

]]>
How Big Tech is ‘failing the Sudanese people’ https://www.codastory.com/newsletters/sudan-general-twitter-death/ Thu, 27 Apr 2023 14:29:26 +0000 https://www.codastory.com/?p=43014 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Elon “no bots” Musk empowers a flood of spoof accounts and OpenAI might get shut down in Europe.

The post How Big Tech is ‘failing the Sudanese people’ appeared first on Coda Story.

]]>
Mohamed Hamdan Dagalo isn’t dead. But the leader of Sudan’s Rapid Support Forces — the paramilitary organization formerly known as the Janjaweed, notorious for carrying out the genocide in Darfur in the early 2000s — was rumored to have died this week, as the RSF and Sudan’s armed forces continue to wage war against each other on the streets of Khartoum. 

It started with a tweet from what looked like the official account of the RSF. Nearly a million people saw it, and more than a thousand retweeted it. Many surely saw the account’s blue checkmark as confirmation that this was the real RSF. Except it wasn’t. @RSFSudann (note the extra “n”) has existed for years but only recently acquired a blue checkmark, thanks to Elon Musk’s new approach to “verification,” in which anyone can claim to be anyone else, so long as they’re willing to pay him $8 per month.

There was plenty of chatter about the new verification paradigm this week — long-dead public figures from Hugo Chavez to Anthony Bourdain mysteriously became verified, and a row between verified-but-not-real and unverified-but-real accounts representing New York City made for some laughs. But what happened with @RSFSudann is all too serious.

Just a few days before the false news of Dagalo’s death appeared on Twitter, hundreds of Twitter accounts began promoting and retweeting RSF content in a style that bore many hallmarks of a coordinated disinformation campaign. 

Researchers at the Atlantic Council — working with data from Beam Reports, a Sudanese media and fact-checking organization — identified 900 Twitter accounts that seemed to be caught up in the operation to boost the RSF on social media. While this particular burst of online activity is new, the RSF has made good use of big tech platforms for years to bolster its public image. Mohamed Suliman, a senior researcher at Northeastern University’s Civic AI lab, who hails from Sudan, called this out just as the fighting began in mid-April.

“American big tech companies such as #twitter and #facebook failed the Sudanese people by allowing the RSF militia pages and accounts to exist during the last years,” he tweeted

Suliman and I chatted over email this week about the role of big tech companies in the conflict.

“The RSF militia has been using social media platforms as PR tools to normalize its existence,” Suliman told me. “Their Facebook pages, Twitter accounts, even YouTube channels were used to polish the bad image people have about their bloody background in Darfur and [in the 2019] Khartoum massacre.” 

The Sudanese army is present on social media, too, but by Suliman’s estimation, it has “no coherent social media strategy.” In the current situation, the disinformation — whether it’s a bogus tweet claiming the general is dead or one claiming that attacks have taken place where they haven’t — could affect how the fighting plays out and how civilians make decisions about where to take shelter or how to traverse dangerous territory.

As combat continues across Sudan, it is becoming more and more difficult for people to access reliable information about what’s happening. Internet connections are faltering or collapsing altogether, journalists are struggling to report the news safely and social media is a jumble of real news, hearsay and propaganda. With blue ticks available to anyone for a fee, it’s become exponentially harder to know who’s really speaking.

IN GLOBAL NEWS

OpenAI is under fire in the EU. ChatGPT and its hot younger friend, GPT-4, have captured the curiosity of millions of internet users worldwide, but authorities in a growing number of countries say the chatbots are also capturing our personal data without our consent — and quite possibly breaking the law along the way. Italy preemptively blocked ChatGPT on these grounds a few weeks ago. And now data protection regulators in France, Germany and Ireland, alongside the European Data Protection Board, are investigating OpenAI, the tools’ parent company. MIT Tech Review’s Melissa Heikkila explained that at this stage, if OpenAI wants to avoid big fines — or even a big ban — in the EU, it will have to find some way to prove that it had a “legitimate interest” in hoovering up everyone’s info in the first place.

The company is probably safe for now in the U.S., since we still have no comprehensive data protection laws here. If policymakers want to change this, they better act fast, say researchers at New York University’s AI Now Institute, a handful of whom just finished a stint advising the Federal Trade Commission on tech policy. In a new position paper on mass data collection and AI, they highlight the harms that we already know about, ranging from election interference to algorithmic discrimination by public housing and health agencies, and offer some policy prescriptions.

A student was arrested for “inciting Hong Kong independence” on social media. The young woman allegedly posted the message while studying in Japan. When she returned home to Hong Kong last month, she was confronted by the police and charged under the country’s controversial National Security Law. Deutsche Welle reported that this is the first known arrest of a Hong Konger for breaking the law while outside of Hong Kong’s jurisdiction.

WHAT WE’RE READING

  • Twitter’s changes this week correlated — and maybe caused — with a bump for state-controlled media outlets like Russia’s RT and China’s CGTN (formerly CCTV). The Atlantic Council’s Digital Forensic Research Lab has a quick new study on the shift.
  • AI translation tools are creating new problems for asylum seekers. Rest of World reported this week on how error-riddled machine translations of Pashto and Dari languages are causing Afghan refugees’ asylum applications to be delayed and, in one case, denied altogether.

The post How Big Tech is ‘failing the Sudanese people’ appeared first on Coda Story.

]]>
Cuban journalists are being silenced, one mobile line at a time https://www.codastory.com/newsletters/cuba-etecsa-phone-access/ Fri, 21 Apr 2023 13:53:40 +0000 https://www.codastory.com/?p=42727 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: The internet goes dark as conflict erupts in Sudan, Turkish politicians get on the disinfo beat.

The post Cuban journalists are being silenced, one mobile line at a time appeared first on Coda Story.

]]>
Residents of Khartoum found themselves without internet access on April 16, as violent clashes broke out between the Sudanese army and the Rapid Support Forces, a paramilitary group descended from the militias that perpetrated genocide in Darfur in the early 2000s. The warring entities have been in a power struggle since the ousting of former president Omar Al-Bashir in 2019 that has since escalated amid negotiations over Sudan’s attempts to transition to democracy. The internet blackout was ordered by Sudan’s telecom regulator and was implemented by at least one telecom carrier, MTN, which holds 37% of the local market. Two officials from the South Africa-based telco confirmed these reports, according to Al Jazeera. The blackout lasted only a few hours. Another outage, this time on Canar Telecom, was recorded by technical researchers on April 19.

Communications blackouts are scary in violent conflict situations, especially in places where mobile messaging services like WhatsApp dominate person-to-person communication. They leave people unable to seek shelter or medical attention or to find out if their loved ones are safe. At least this outage was mercifully brief. And it’s nothing new for people in Sudan. During the protests that brought down Al-Bashir, and the 2021 military coup, nationwide internet blackouts went on for days, and sometimes weeks, at a time.

Meanwhile in the U.S., Discord, the online discussion platform popular among gamers, was thrust into the national security spotlight last week when news broke that a young military officer named Jack Teixeira had published more than 100 classified documents, most of them related to U.S. strategy around the war in Ukraine, on a Discord server. Although it sounds like Discord has been quick to cooperate with authorities and to explain at least some of its response to the public, I’m still wondering whether it will become a new target of attempts to regulate tech platforms. Time will tell.

And the clock is also ticking in Turkey, where disinformation is peaking in the lead up to national elections on May 14. Speaking at the National Press Club in Washington, D.C., the Turkish government’s communications director, Fahrettin Altun, voiced concern about disinformation in the wake of the February earthquakes and as polling day approaches. He might mean something different than we do. This week, Middle East Eye reported that at least 12,000 Twitter accounts were reactivated in Turkey, with most using either Russian or Hungarian as their primary language — sure signs of troll farms preparing to manipulate people’s understanding of what is and isn’t happening around them. We’ll be keeping close watch on the disinfo machine there as elections approach.

CUBAN AUTHORITIES OFFER ‘PERSONALIZED’ CENSORSHIP

Another election took place recently that didn’t make big headlines: Cubans voted for representatives to the country’s national assembly, which cemented a second term in office for Miguel Díaz Canel, the successor of the Castro dynasty. While every person in office there is either a member of, or sympathetic to, the Cuban Communist Party, this is far from true of all Cubans, many of whom did not vote at all.

It was the first national election since the 2020-2021 social movement that saw public demonstrations erupt across the country at a level never before seen in Cuba’s history. Digital activism and independent online media work played a big role in the movement and continue to fuel it, albeit more quietly than two years ago.

State repression of people doing this work has been a constant source of struggle. But thanks to these networks, many more people now have seen and heard firsthand accounts of the thuggish realities of living under Cuban state security. Hundreds of Cubans have been arrested for demonstrating, criticizing state policies and practice and reporting on human rights violations. Estimates and definitions vary, but at the end of 2022, there were at least 1,000 people jailed or serving time on politically-driven charges. 

Those who are not behind bars continue to use a combination of tactics on the street and on their screens to show what’s happening day to day. Cubalex, a local independent group that monitors rule of law violations, kept an open record of “incidents of repression” leading up to elections. Much of what’s documented here feels like garden-variety authoritarianism — street surveillance, police stops, brief detentions. But one tactic has an extra special flavor, something unique to a small state — Cuba’s sole telecommunications provider, ETECSA, has been cutting off individual mobile phone service when journalists or activists get too vocal online.

I took a look at the issue last week, with the help of Yucabyte, a Cuban media and activism site, and its founder, Norges Rodriguez. The tactic, he explained, is nothing new. They’ve been cutting off people’s communication lines, and fixed line internet, since 2003. But the protests triggered a new wave of cuts, as did the recent election. Some activists have tried approaching ETECSA to find out what’s going on.

“The response is always that you need to change your SIM card, or that there are technical problems. Or they just don’t have a response,” Rodriguez told me. “They never say that the service outage is motivated by what a person posted on social networks. ETECSA will not get into that.”

In an interview in Yucabyte, veteran independent journalist Luz Escobar talked about her experiences having her phone line cut.

“If ETECSA cuts off your service, you have no way to connect,” she said. A seasoned reporter with 14yMedio, one of the country’s best-known independent media outlets, Escobar was detained for her reporting in 2020 and had her mobile line cut off repeatedly before she left the country last year.

“They know the phone is a powerful tool. That’s why they’re so afraid of it, and why they always make you lose time and money, because they know how difficult it is to get a mobile phone in Cuba, and they know how difficult it is to do the most basic things, like download an app,” Escobar said.

Indeed, if you’ve been wondering what’s so hard about getting a new SIM card or a new phone altogether, consider the country’s economic crisis. It has always been hard and expensive to get and maintain hardware — now it’s even more so.

For human rights activist Abu Dayanah, these tactics will ultimately only put more pressure on the authorities: “When there’s more persecution, there’s more resistance,” he says. Dayanah’s mobile line has been cut off continuously since April of 2021. But it has not kept him quiet.

I’ll close by sending kudos to the digital forensic sleuths at Citizen Lab for their recent release on the surveillance tech firm QuaDream. Since the research group published evidence of the spyware company’s “zero-click” attack capabilities the company has terminated its operations altogether. That is impact.

The post Cuban journalists are being silenced, one mobile line at a time appeared first on Coda Story.

]]>
Russia now sends men to war with an electronic summons https://www.codastory.com/newsletters/russia-electronic-draft/ Thu, 13 Apr 2023 17:47:40 +0000 https://www.codastory.com/?p=42557 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

The post Russia now sends men to war with an electronic summons appeared first on Coda Story.

]]>
Earlier this week, Russian legislators voted in favor of a new digital draft. Conscripts, currently all men between the ages of 18 and 27, will now be called up electronically, as will other men eligible to serve. Hundreds of thousands of Russian men fled the country last September in response to Putin’s “partial mobilization” of citizens to fight the war in Ukraine. The Russian parliament wants to make sure this doesn’t happen again. 

New amendments to the laws on conscription will make it illegal to ignore electronic military summons. As soon as they are posted to a person’s e-government account (known in Russia as Gosuslugi), they will be considered “received.” This is a big change. Previously, draft officers had to physically hand a person their summons before he could be considered a conscript. But the new rules mean conscripts who fail to enlist within just seven days of receiving the summons on their Gosuslugi account will be banned from leaving the country and have their assets frozen.

Conscripts who ignore their electronic summons, or risk fleeing the country, will be considered fugitives. The legislation will also create a unified registry of citizens eligible for military service. I’ve heard from Russian colleagues who are concerned that such a registry might be used or abused by the Kremlin and, given Russia’s notoriously leaky systems, might be open to exploitation by other malicious actors. They’re right to worry. From Kenya to India to South Korea, we’ve seen too many examples of how citizen registries can be compromised.

India’s IT Ministry says it will form a “fact-checking unit” that will review online news related to the government, flag stories or information it deems “fake” and then order their removal. This will require everyone from small news sites to big online platforms like Facebook and Twitter to take down these kinds of posts or risk litigation. 

It’s hard to imagine small outfits withstanding these requests. But there is precedent for big companies to push back against mechanisms like this. Under its previous leadership, Twitter had at least some appetite for standing up to overly broad censorship orders from governments, including India — the company took the Indian government to court over their requests to censor tweets about the farmers’ protests in 2021. But the Musk regime seems glad to honor legal requests in India, regardless of public interest. Earlier this year, we reported on a flurry of locally-censored tweets and account suspensions that came at the government’s behest, amid public unrest in Punjab. And just this week, the company appears to have censored two tweets from an Indian journalist. One of the tweets apparently quoted the Home Minister Amit Shah. Ordinarily, when tweets are taken down because of a legal request from a government, the censorship is restricted only to the country in question. Users in other countries, or with the aid of a VPN, would still be able to see the tweets blocked in India. But in this case, the block was applied worldwide, so no one anywhere could see the tweets.

Though there has been some speculation, it is not clear what the tweets actually said — a classic problem with swift, wholesale censorship in these situations is that the public has no way of knowing what triggered the response to begin with. If Twitter (and other big multinational social media platforms) were required to put all of its content into a searchable public archive — or at least archive speech by state officials — it would enable journalists and researchers to get to the bottom of mysteries like this one. 

Twitter’s apparent willingness to indulge the Indian government is a reminder that Twitter is a private enterprise – not a public square. In fact, Twitter is no longer even an independent company. Last week it came to light that Elon Musk had folded Twitter, Inc. into another company he owns, known only as X, presumably in tribute to the multibillionaire’s favorite letter. Incorporated in Delaware, the U.S. state that most closely resembles a tax haven, X may be part of Musk’s stated desire to turn Twitter into an “everything” app, akin to China’s WeChat, that enables payments, ride-sharing, banking and more, and as such will probably lead to antitrust probes from Washington. Slate has a helpful breakdown of the business side of the story.

Armenia wants to up its digital censorship game. Proposed amendments to the current martial law regime, in place due to the country’s ongoing conflict on its border with Azerbaijan, would give the government the power to impose “temporary suspension (blocking) of websites, social networks, Internet applications, as well as partial or complete restriction of Internet access in the territory of the Republic of Armenia.” There is precedent for these kinds of restrictions — when violence in the contested Nagorno-Karabakh region escalated last September, TikTok was blocked on both the Armenian and Azerbaijani sides. Dozens of NGOs signed an open letter this week, hosted by Access Now, calling for this and related amendments to be taken off the table.

WHAT WE’RE READING

  • Russia and China are often mentioned in the same breath when we talk about tech and authoritarianism, though their tactics don’t always align. But a new investigation from Radio Free Europe/Radio Liberty has brought to light evidence of cooperation between the Cyberspace Administration of China and Roskomnadzor, the Russian state agency charged with policing the internet.
  • Citizen Lab put out a first look at QuaDream, a spyware company that has far less name recognition but many of the same terrifying tricks — including zero-click exploits — as Israel’s NSO Group. 
  • And in a new essay for Tech Policy Press, Data & Society’s Jenna Burrell writes that generative AI is nothing short of a “Marxist nightmare: the work of millions accruing to a few capitalist owners who pay nothing at all for that labor.”

The post Russia now sends men to war with an electronic summons appeared first on Coda Story.

]]>
Amid political crisis, Tunisia’s Saied targets his critics online https://www.codastory.com/newsletters/tunisia-civil-society-online-harassment/ Thu, 06 Apr 2023 14:35:04 +0000 https://www.codastory.com/?p=42367 Authoritarian Tech is a weekly newsletter tracking how people in power are abusing technology and what it means for the rest of us.

Also in this edition: Biden’s toothless spyware ban and the digital side of Uganda’s new anti-LGBTQ law.

The post Amid political crisis, Tunisia’s Saied targets his critics online appeared first on Coda Story.

]]>
There’s new evidence that the U.S. government is using NSO Group’s surveillance tools. We already know about the FBI’s purchase of NSO’s infamous Pegasus spyware back in 2019. But this week, documents reviewed by the New York Times showed that the Israeli surveillance tech giant went under contract in 2021 with a company called Riva Networks that was operating as a front for the U.S. government. This gave government agents — the documents don’t disclose what agency or department they worked with — access to a geolocation tool built by NSO that would allow agents to track anyone through their mobile device, without their knowledge. 

White House staff say they knew nothing of it before the Times’ story and that the contract — which appears to remain active — stands in violation of U.S. Commerce Department sanctions. The revelations shouldn’t be surprising — there have been other clues about NSO working its way into U.S. government contracts. And they make the Biden administration’s recent ban on commercial spyware look like quite the toothless tiger.

Maybe Pegasus isn’t really so bad? The Indian government says it is a “PR problem,” more than anything else. Last week, the Financial Times reported that officials at India’s Ministry of Defense put out a tender for new spyware, with the express intention of contracting with a surveillance technology company that is “less controversial” than NSO Group. They are willing to pay up to $120 million for it. Pegasus has been found on the devices of journalists, human rights defenders and opposition politicians in countries around the world, including India. When Coda’s podcast team dug into the use of Pegasus in India, we talked with 16 Indian lawyers and activists, many involved in representing Dalit and indigenous communities, whose phones were infected with the spyware. Soon afterward, they were accused of plotting to bring down the Modi government. For a compelling look at the real-life effects of this technology, give this episode a listen.

There will be likely digital ramifications to Uganda’s new law criminalizing homosexuality. Although same-sex relationships have long been outlawed in Uganda, the country’s new Anti-Homosexuality Bill, which is almost sure to pass, will criminalize LGBTQ identity itself. It also covers a host of actions associated with LGBTQ people’s rights, including the act of “promoting” or speaking about LGBTQ issues in both traditional and digital media. Journalists in the country are taking note. In an interview with Deutsche Welle, a reporter who wished to remain anonymous suggested that the law will make it nearly impossible to cover topics affecting LGBTQ communities. “It is like they are telling us to leave them: don’t touch them,” she said.

TUNISIA’S PERPETUAL STATE OF EMERGENCY

Attacks on Tunisian civil society and labor groups are on the rise on social media. The country’s political crisis seems to be scaling new heights, amid rising rates of inflation and unemployment and political cleavages exacerbated by President Kais Saied’s 2021 decision to dissolve the government and rule by decree. The president drew fierce public condemnation over a February speech in which he blamed aspects of the country’s overlapping crises on migrants from sub-Saharan Africa, using inflammatory, racist language and accusing Black African migrants of bringing “violence and criminality” to the north African country.

Journalists, union leaders and NGO workers who have been covering or speaking out about these issues are facing wave after wave of smear campaigns, mainly on Facebook, that they say mirror Saied’s rhetoric. Ramadan Ben Omar, who works on migrant rights issues with the Tunisian Forum for Economic and Social Rights, has steadily spoken out against Saied’s anti-migrant rhetoric. On Facebook, Ben Omar has since been accused of treason and “undermining the state’s image internationally.” In an interview with the Beirut-based digital rights group SMEX, Ben Omar explained that harassment campaigns against him have “spread from Facebook into the real world.” “Some accused me of treason simply because I supported irregular migrants and my criticism of the state’s policy towards undocumented migration,” he said. 

Looming large for anyone criticizing the current regime is Tunisia’s relatively new cybersecurity decree, issued in late 2022. Under Decree 54, anyone found guilty of “deliberately using communication networks and information systems to produce, promote, publish or send false information or rumors” can face up to five years in prison and fines of up to $16,000. The law also gives authorities broad latitude to monitor the communications of suspected violators.

Laws like these have become all too common for regimes that fear the power of their critics, and they become extra potent in situations like this one. Tunisia has been in a legal “state of emergency” almost continuously since the 2011 revolution that brought the ousting of longtime president Ben Ali — a moment that made Tunisia a beacon of hope for real democratic progress in the region and one that was spurred in part by digital organizing. Then again, anyone who was part of Tunisian civil society before the 2011 revolution knows the tactics well. Some of the tools have changed — social media and digital surveillance have come a long way since then — but the authoritarian playbook is largely the same.

WHAT WE’RE READING

  • For more on Tunisia’s trajectory since the 2011 revolution, I recommend this recent essay for Global Voices by Saoussen Ben Cheikh, in which she shows how the state of emergency has become a “permanent feature of governance.”
  • U.S. immigration authorities have been using administrative subpoenas to demand data from public schools, abortion clinics and even media organizations. WIRED has all the details, in a great new investigation by Dhruv Mehrotra.
  • More shots have been fired in the AI ethics war, which seems to feature three camps — the fast-moving AI bros, the slightly more sensitive bros who want to exercise some caution while still profiting off AI and people focused on the actual harms that AI is already causing. Prominent members of the last group authored the 2021 Stochastic Parrots paper that led to Google’s firing of Timnit Gebru. They now work together at the DAIR Institute and put out an open letter last week, in response to an earlier letter, from the sensitive bros. Both are worth a read. If you’re wondering how any of this matters for the rest of us, check out this roundup piece from TechCrunch.

The post Amid political crisis, Tunisia’s Saied targets his critics online appeared first on Coda Story.

]]>