The surveillance industrial complex is thriving at the border

Ellery Roberts Biddle


On Tuesday, the European Court of Human Rights issued a pivotal ruling on mass surveillance that should have implications in the U.K. and beyond. The court found that plaintiffs Claudio Guarnieri and Joshua Wieder, both experts on data protection and surveillance, “reasonably” believed that GCHQ, the U.K.’s signals intelligence agency, had intercepted their data under its bulk data collection regime.

Guarnieri and Wieder originally brought their case to the U.K.’s Investigatory Powers Tribunal in 2016, in what amounted to a test of the system in the wake of the Edward Snowden revelations, which exposed large-scale spy programs not only of the U.S. but also of the U.K., Australia, Canada and New Zealand. When the Tribunal refused to hear their case, they took it to Strasbourg. Even though the two plaintiffs aren’t U.K. citizens, the court decided they still had some baseline rights to privacy under the European Convention on Human Rights.

There’s a difference between governments hoovering up data as a routine practice and immigration agencies tracking individuals after they cross a border, but the case should set some precedent concerning the data privacy rights of non-U.K. citizens once they’re in the U.K. What might this mean for migrants coming to the U.K. from across the globe in pursuit of a better life? In a world where everyone depends on internet-based tools to communicate, travel, work and earn money — tools that collect gobs of data about us along the way — the question feels pertinent.

The surveillance industrial complex should be top of mind in the U.S. too, as we learn more about border security and management agencies’ exploitation of digital data to surveil people trying to enter the U.S. In Texas, it came to light in late August that a group of Texas National Guard members — acting within Governor Greg Abbott’s controversial state-run border mission — had carried out an unauthorized spy operation in which they deliberately infiltrated WhatsApp groups used by migrants and smugglers to communicate about their routes. 

I’m not sure which is worse — WhatsApp infiltration or border agencies creating fake social media profiles in order to “research” people who are seeking residency in the U.S. through established legal channels. The latter strategy, by the way, has been deployed not as part of some rogue Texas border operation but under the auspices of the U.S. Department of Homeland Security. Critical details about the program surfaced last week, thanks to a series of open records requests filed by NYU’s Brennan Center for Justice. 

On a somewhat brighter note, last week, U.S. Customs and Border Protection publicly vowed to stop buying troves of people’s location information from data broker companies like Venntel by the end of this month. How are third-party companies you’ve probably never heard of getting their paws on your data? Too often, when you sign up for a new digital service and “agree” to its terms and conditions, you have no choice but to authorize the service to sell your data to companies like Venntel, which will analyze and repackage it for sale to the highest bidder. If the agency keeps its promise, CBP will soon no longer be one of those bidders.

GLOBAL NEWS

Pegasus, one of the world’s most pernicious surveillance technologies, infected the iPhone of acclaimed Russian journalist Galina Timchenko. On Wednesday, researchers at the University of Toronto’s Citizen Lab and Access Now released technical evidence that Timchenko’s phone was compromised in February 2022. This is big, not only because of Timchenko’s unique position as the co-founder of the leading Russian independent media outlet Meduza, which operates out of Latvia, but also because NSO Group, the Israeli spyware firm that builds Pegasus, has publicly stated that it won’t deploy its products in Russia or the U.S., or against people from these countries, presumably due to pressure from the Israeli government. In Meduza’s coverage of the revelations, Timchenko described feeling both terrified and defiant about the discovery. “Just what were they planning to find? They put me under a magnifying glass, hoping to catch something… Go ahead and watch, you creeps! Feast your eyes,” she said.

Experts have been saying it for a while, and the public is catching up: AI is going to mess with elections. A new Axios-Morning Consult poll shows that half of Americans think AI will help spread disinformation in the lead-up to the 2024 general election in the U.S. and that this will affect election outcomes. They’re right to worry, especially since X (formerly known as Twitter) is planning to open the floodgates and reinstate political advertising on its platform. Though the platform is growing crummier by the day, I think it’s safe to assume that what appears on X will still have a significant impact on what the media decide to cover and what voters believe to be true. And I’m not sure if X is actually shadowbanning the New York Times, but Musk’s attacks on the newspaper, and the fact that the paper’s traffic from Twitter has dropped substantially since late July, don’t look good. While it’s one among many reliable sources out there, it’s icky to think that U.S. voters might be less likely to read the New York Times because Elon tweaked the system out of spite.

U.S. v. Google: It’s finally happening. The U.S. Department of Justice will officially see Google in court this week, in the first of three upcoming antitrust cases against the $1.7 trillion tech behemoth and the first such case brought against any major tech company since the government sued Microsoft in 1998. This case will focus on Google’s search engine, which, the DOJ argues, the company has unfairly elevated to monopoly status by brokering deals with mobile phone makers and browser developers to set Google as their default search engine. The company commands 90% of the search engine market in the U.S., and 94% of it globally. Google argues that it simply offers the best service in the industry and people use it because they love it. 

Tech Policy Press and Ars Technica have put out helpful “what to watch for” pieces about the trial. But the trouble is, the public won’t be able to watch for much, since Google convinced Judge Amit Mehta to keep the trial closed to the public, on the grounds that the company’s precious “trade secrets” might otherwise be compromised. Early next year, the court will hear another case that the DOJ is bringing against the company, concerning its hyperdominance of the online advertising market. I’m even more excited for that one.

WHAT WE’RE READING

  • I am crushing hard on 404 Media, a new tech news venture from VICE Motherboard alums like Joseph Cox, Jason Koebler and Samantha Cole, who wrote this excellent and hilarious piece about the scourge of AI-generated mushroom foraging books on Amazon. The president of the New York Mycological Society says the books offer imprecise or flat-out wrong advice on what to pick and what to avoid. The TLDR here is that if you eat the wrong mushroom, you could die. So consider the source!
  • On that note, I think my friend Ethan Zuckerman is right to worry about AI getting to train itself. He’s written a piece about it for Prospect.
  • And as usual, I am all for popping the chatbot hype balloon, as Sara Goudarzi conveniently does at the Bulletin of the Atomic Scientists. Give her essay a read.
