For OpenAI’s CEO, the rules don’t apply

Ellery Roberts Biddle

Since my last newsletter, a shakeup at OpenAI somehow caused Sam Altman to be fired, hired by Microsoft, and then re-hired to his original post in less than a week’s time. Meet the new boss, literally the same as the old boss.

There are still a lot of unknowns about what went down behind closed doors, but the consensus is that OpenAI’s original board fired Altman because they thought he was building risky, potentially harmful tech in the pursuit of major profits. I’ve seen other media calling it a “failed coup”, which is the wrong way to understand what happened. Under the unique setup at OpenAI — which pledges to “build artificial general intelligence (AGI) that is safe and benefits all of humanity” — it is the board’s job to hold the CEO accountable not to investors or even to its employees, but rather to “all of humanity.” The board (alongside some current and former staff) felt Altman wasn’t holding up his end of the deal, so they did their job and showed him the door.

This was no coup. But it did ultimately fail. Even though Altman was part of the team that created this accountability structure, its rules apparently no longer applied to him. As soon as he was out, his staff threatened to quit en masse. Powerful people intervened, and the old boss was back at the helm in time for Thanksgiving dinner.

Now, OpenAI’s board is more pale, male and I dare say stale than it was two weeks ago. And Altman’s major detractors — Helen Toner, an AI safety researcher and strategy lead at Georgetown University’s Center for Security and Emerging Technology, and Tasha McCauley, a scientist at the RAND Corporation — have been pushed out. Both brought expertise that lent legitimacy to the company’s claims of prioritizing ethics and benefiting “all of humanity.” You know, women’s work.

As esteemed AI researcher Margaret Mitchell wrote on X, “When men speak up abt AI&society, they gain tech opportunities. When non-men speak up, they **lose** them.” A leading scholar on bias and fairness in AI, Mitchell herself was famously fired by Google on the heels of Timnit Gebru, whose own dismissal was sparked by her critiques of the company’s approach to building AI. They are just two of the many women across the broader technology industry who have been fired or ushered out of powerful positions after raising serious concerns about how technology might affect people’s lives.

I don’t know exactly what happened to the women who were once on OpenAI’s board, but I do know that when you have to do a ton of extra work simply to speak up, only to be shut down or shown the door, that’s a raw deal. 

On that note, who’s on Altman’s board now? Arguably, the biggest name is former U.S. Treasury Secretary Larry Summers, who was president of Harvard University until he resigned amid the fallout from a talk in which he “explained” that women were underrepresented in the sciences because, on average, we just didn’t have the aptitude for the subject matter. Pick your favorite expletive and insert it here! Even though Summers technically stepped down as president, the university still sent him off with an extra year’s salary. He has since continued to teach at Harvard, made millions working for hedge funds and become a special adviser at kingmaker venture capital firm Andreessen Horowitz. And now he gets to help decide the trajectory of what might be the most consequential AI firm in the world. That is a sweet deal.

The other new addition to the board is former Salesforce Co-CEO Bret Taylor, who was on the board of Twitter when it was still Twitter. There, Taylor played a major role in forcing Elon Musk to go through with his acquisition of the company, though Musk had tried to back out early in the process. This was good for Twitter’s investors and super terrible for everyone else, ranging from Twitter’s employees to the general public who had come to rely on the service as a place for news, critical debate and coordination in public emergencies. 

In Twitter’s case, there was no illusion about benefiting “all of humanity” — the board was told to act on investors’ behalf, and that’s what it did. It shows just how risky it is for us to depend on tech platforms run by profit-driven companies to serve as a quasi-public space. I worry that OpenAI will be next in line. And I don’t see this board doing anything to stop it.

GLOBAL NEWS

Thousands of Palestinians in the Israeli-occupied West Bank have been arrested since Oct. 7, some over things they’ve posted — or appear to have posted — online. One notable figure among them is Ahed Tamimi, a 22-year-old who has been a prominent advocate against the occupation since she was a teenager. Israeli authorities raided Tamimi’s home in early November and arrested her on accusations that she had written an Instagram post inciting violence against Israeli settlers. Her family denied that she had posted the message, saying it came from someone impersonating her amid an online harassment campaign targeting the activist. She has yet to be charged with any crime. On Tuesday, Tamimi’s name appeared on an official list of Palestinian detainees slated for release.

Israeli authorities have been quick to retaliate against anything that might look like antisemitic speech online — unless it comes from Elon Musk. The automotive and space-tech tycoon somehow managed to get a personal tour of Kfar Aza kibbutz — the scene of one of the massacres that Hamas militants committed on Oct. 7 — from no less than Prime Minister Benjamin Netanyahu himself this week. Just days prior, Musk had been loudly promoting an antisemitic conspiracy theory about anti-white hatred among Jewish people on X, describing it as “the actual truth.” Is Netanyahu not bothered by the growing pile of evidence that Musk is comfortable saying incredibly discriminatory things about Jewish people? As with Altman, the rules just don’t apply when you’re Elon Musk.

And there was a business angle to Musk’s visit to Israel. He has a habit of waltzing into cataclysmic crises and offering up his services. It’s always billed as an effort to help people, but there’s usually a thinly veiled geopolitical motive underneath. While in Israel, he struck a deal that will allow humanitarian agencies in Gaza to use Starlink, his satellite-based internet service operated by SpaceX. Internet connectivity and phone service have been decimated by Israel’s war on Gaza, in which airstrikes have destroyed infrastructure and the fuel blockade has left telecom companies all but unable to operate. So Starlink could really help here. But in this case, it will only go so far. Israel’s communications ministry is on the other end of the agreement and has made clear that access to the network will be strictly limited to aid agencies, arguing that a more flexible arrangement could allow Hamas to take advantage of it. Journalists, local healthcare workers and just about everyone else will have to wait.

WHAT WE’RE READING

  • A study by Wired and the Integrity Institute’s Jeff Allen found that when the messaging service Telegram “restricts” channels that feature right-wing extremism and other forms of radicalized hate, the channels don’t actually disappear — they just become harder for non-subscribers to “discover.” Vittoria Elliott has the story for Wired.
  • In her weekly Substack newsletter, crypto critic and Berkman Klein Center fellow Molly White offered a thoughtful breakdown of Silicon Valley’s “effective altruism” and “effective accelerationism” camps, which she writes “only give a thin philosophical veneer to the industry’s same old impulses.”
