The Safety Lead at Anthropic Just Walked Out
When the guy writing the safety manual quits, pay attention.
Yesterday, I published a piece in which I sat down with Claude AI and interviewed the robot for my story about big education’s ties to the Epstein files and AI. But I inadvertently left out an important part of the story.
On Monday, February 9th, Mrinank Sharma—the man who led AI safety research at Anthropic, the company that built Claude—resigned.
Sharma didn’t leave for another tech company. He didn’t get poached by a competitor. He quit to practice what he calls “courageous speech.” And he put his reasoning in writing.
“The world is in peril,” he wrote on X. “And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”
That’s the head of safety research at a company positioning itself as the responsible AI developer, saying the world is in peril and walking away.
But here’s the line that stuck with me: “Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma wrote. He said employees “constantly face pressures to set aside what matters most.”
This wasn’t a quiet exit. This was a resignation letter with footnotes and poetry. This was someone saying out loud what a lot of people in that building probably think in private.
I spent the last few days asking Claude hard questions about who owns AI, who’s building it, and what happens when those two things intersect with three million pages of DOJ documents. And now the person whose job was to keep that system aligned with human values just said the pressures inside make that job nearly impossible to do.
Sharma warned that humanity is approaching a threshold where its wisdom must grow as fast as its capacity to reshape the world. He’s not wrong. But he’s also not staying to help fix it.
When the safety lead walks out, that’s not a personnel change. That’s a warning sign.
And it came the same week I got a letter from a CEO defending his company’s photo contracts with American schools—without mentioning the private equity firm that owns it, or the name that keeps showing up in those Epstein files (see yesterday’s post).
None of this is coincidence. It’s all the same story. Who owns the tools? Who profits from the systems? And what happens when the people inside those companies can’t reconcile the gap between what they’re told to say and what they’re told to do?
Matt Shumer wrote that something big is happening. Mrinank Sharma just confirmed it—then left the building.
—Carol


I shared your first AI post with a professor friend in Rhode Island. She is going to share it with some of her students. She said "it so concisely explains what happens when a small minority of people have so much power over the lives of the rest of us."