The New Yorker Just Asked the Question Nobody Inside OpenAI Would
Ronan Farrow and Andrew Marantz spent 18 months on a single story. They reviewed more than 200 pages of internal documents, including private memos from people who worked directly with Sam Altman, and interviewed over 100 sources. The result landed in The New Yorker this week, and it is the most detailed reconstruction yet of why the OpenAI board fired Altman in November 2023, and whether they were right to do it.
This is not gossip. This is not a disgruntled ex-employee airing grievances. Farrow and Marantz are methodical journalists who built a documented case. The central question they pose is simple and damning: was the board correct when they said Altman couldn’t be trusted?
I think that question deserves a serious answer. And the timing of this piece makes it impossible to ignore.
The Original Sin of OpenAI’s Structure
OpenAI was built on an unusual premise. The founders believed they might be building the most dangerous technology in human history. That belief was not rhetorical. It was supposed to be structural. The nonprofit board had oversight authority precisely because the mission was meant to stay above commercial pressure.
That structure is now being dismantled. OpenAI is converting to a for-profit entity and raising capital at a reported $300 billion valuation. The safety rationale that justified the original design, the reason a nonprofit board had the power to fire the CEO in the first place, is being traded for growth capital.
So the question of what actually happened in November 2023 matters more now than it did then. If the board had legitimate concerns about Altman’s honesty, and those concerns were well-documented, then what does it mean that he not only returned but is now steering a full conversion to for-profit operation?
What the Documents Apparently Show
Farrow’s thread on the story notes that the reporting includes never-before-disclosed internal memos and extensive private notes from a close colleague of Altman’s. The board’s stated reason for the firing was that Altman had not been “consistently candid” with them. That phrasing was careful and vague at the time. The New Yorker piece appears to give it specific content.
I haven’t seen every document, but the shape of the reporting is clear: this is a pattern-of-behavior story, not a single-incident story. That distinction matters enormously. A one-time miscommunication gets managed. A documented pattern of candor failures in the CEO of a company building frontier AI is a governance catastrophe.
The Reinstatement Was Always the Stranger Part
Most of the public narrative after November 2023 focused on the board’s apparent incompetence. They fired the CEO, couldn’t hold the coalition together, and reversed course within five days. Altman came back. The board members who voted to fire him were gone.
That story, the fumbling board versus the competent CEO, was the version that stuck. But it was always the wrong frame. The question was never whether the board executed the firing cleanly. The question was whether their underlying concern was valid. A board can be both right about the problem and terrible at handling it.
The New Yorker is making the case that they were right about the problem.
Why This Moment Is Different
Critical coverage of Altman is not new. What makes this piece different is the documentation and the source count. More than 100 people on the record or on background. More than 200 pages of internal materials. Eighteen months of reporting. That is not a narrative built on vibes or competitive animus. That is an evidentiary case.
And it arrives at the exact moment OpenAI is asking the world to trust it with more capital, more infrastructure, and more access to critical systems than any AI company has ever had. The $300 billion valuation is not abstract. That money buys compute, talent, political influence, and the ability to shape what frontier AI looks like for the next decade.
If the person at the top of that structure has a documented honesty problem with his own board, that is not a PR issue. That is a structural risk.
What Happens Now
The honest answer is that probably nothing changes immediately. Altman has survived every prior wave of criticism. The capital is still flowing. The for-profit conversion is still proceeding.
But journalism like this has a longer half-life than a news cycle. The 200 pages of documents exist. The 100-plus sources exist. And regulators in the EU, the UK, and eventually the US are paying closer attention to AI governance than they were two years ago.
The New Yorker piece won’t stop the fundraising round. It might, over time, make it harder to argue that OpenAI’s structure and leadership deserve the kind of public trust that was supposedly baked into the original nonprofit design.
That original design is gone now. The least we can do is be clear about why.
#OpenAI #SamAltman #AIGovernance #ArtificialIntelligence #TechPolicy
