Why are AI software makers lobbying for kids’ online safety laws?

Ellery Roberts Biddle

 

THINK OF THE CHILDREN

Last week, the U.K. passed the Online Safety Bill, a law that’s meant to help snuff out child sexual exploitation and abuse on the internet. The law will require websites and services to scan for and somehow remove any content that could be harmful to kids before it appears online.

This could fundamentally change the rules of the game not only for big social media sites but also for any platform that offers messaging services. A provision within the law requires companies to develop technology that enables them to scan encrypted messages, thus effectively banning end-to-end encryption. There is powerful backing for similar laws to be passed in both the U.S. and the European Union.

Scouring the web in an effort to protect children from the worst kinds of abuse sounds like a noble endeavor. But practically speaking, this means the state would be surveilling literally everything we write or post, whether on a public forum or in a private message. If you don’t already have a snoopy government on your hands, a law like this could put you just one election away from a true mass surveillance regime of unprecedented scale. Surely, there are other ways to keep kids safe that won’t be quite so detrimental to democracy.

As a parent of two tiny children, I feel a little twinge when I criticize these kinds of laws. Maybe the internet really is rife with content that is harmful to children. Maybe we should be making these tradeoffs after all. But is kids’ safety really what’s driving the incredibly powerful lobbying groups that somehow have a seat at every table that matters on this issue, from London to D.C. to Brussels?

It is not. This week, Balkan Insight dropped a deeply reported, follow-the-money investigation into the network of lobbying groups pushing for this kind of “safety” legislation in Europe, and it draws a connection that really ought to be on everyone’s radar: The AI industry is a major lobbying force driving these laws.

The piece takes a hard look at Thorn, a U.S. organization that has been a vocal advocate for children’s online safety but that has also developed proprietary AI software that scans for child abuse images. With one hand, Thorn seems to be advocating for companies to scan every drop of data that passes through their servers; with the other, it offers the perfect technical tool for said scanning. It’s quite the scheme. And it seems to be working so far — the U.K. law is a done deal, and talks are moving fast in the right direction for Thorn in Europe and the U.S. Oh, and the U.S. Department of Homeland Security is already among Thorn’s clients.

As a number of sources quoted in the Balkan Insight investigation point out, these laws might not even be the best way to tackle child exploitation online. They will require tech companies to break encryption across the internet, leaving people vulnerable to all kinds of abuse, child exploitation included. This level of surveillance will probably send the worst predators into deeper, darker corners of the web, making them even harder to track down. And trying to scan everything is often not the best way to trace the activities of criminal groups. 

I’m sure that some of the people pushing for these laws care deeply about protecting kids and believe that they are doing the best possible thing to make them safer. But plenty of them are driven by profit. That is something to worry about.

GLOBAL NEWS

The internet was barely accessible last week in the disputed territory of Nagorno-Karabakh, where Azerbaijani troops have effectively claimed control of the predominantly ethnic Armenian region. Tens of thousands of Karabakhi Armenians are fleeing the mountainous region that abuts the Azerbaijani-Armenian border, in what one member of the European Parliament described as a “continuation of the Armenian genocide.” Russia’s role in the conflict (Moscow, preoccupied with its war in Ukraine, appears to have withdrawn its long-time support for the Armenian side) and Azerbaijan’s importance to Europe as a major oil producer have dominated most of the international coverage. But the situation for people in the region is dire and has largely been ignored. The lack of basic digital connectivity isn’t helping — researchers at NetBlocks showed last Thursday that Karabakh Telecom had almost no connectivity from September 19, when Azerbaijan launched its full military offensive, until September 21, when Armenian separatist fighters surrendered. TikTok was also blocked during this period.

Azerbaijani authorities are also taking measures to ensure that their critics keep quiet online. Several Azerbaijani activists and journalists who have posted critical messages or coverage of the war on social media have been arrested for posting “prohibited” content.

An internet blackout has gone back into effect in Manipur, India, just days after services were restored. The state had been under a blackout since the beginning of May, as nearly 200 people have died in still ongoing ethnic violence, and connectivity was finally restored last weekend. But protests erupted this week in Imphal, the capital of the northeastern state that borders Myanmar, after photos of the slain bodies of two students who had gone missing in July surfaced and went viral on social media. Now the Manipur government, which has largely failed to contain the violence even as its critics accuse it of fomenting clashes, says disinformation, rumors and calls for violence are being spread online, necessitating another shutdown. An order from the state governor’s office, which has been circulating on X, says the shutdown will last for another five days. Indian authorities frequently shut down the internet in embattled states, despite the cost to the economy — an estimated $1.9 billion in the first half of this year alone — and the apparent lack of effect on public safety.

Speaking of shutdowns, there’s new hope that Amazon might have to shutter some part of its business or at least clean up its practices. This week, the U.S. Federal Trade Commission, alongside 17 state attorneys general, filed a massive lawsuit accusing the e-commerce behemoth of inflating prices, degrading product quality, and stifling innovation. The FTC says these practices hurt both consumers and third-party sellers, who have little choice but to sell their goods on Amazon’s platform. This is a bread-and-butter anti-monopoly case; it doesn’t rely on the pioneering legal theories that FTC Chair Lina Khan is known for. In legal scholar and former White House tech advisor Tim Wu’s view, “The FTC complaint against Amazon shows how much, over the last 15 years, Silicon Valley has understood and used scale as a weapon. In other words, the new economy relied on the oldest strategy in the playbook — deny scale to opponents.” I couldn’t agree more.

