Why Altman (and AI) is under attack
First the New Yorker piece, then the Molotov cocktail...
Save the date: I'll be hosting a Zoom discussion for NonZero members next Saturday, April 25th, at noon US Eastern Time. The main topic will be AI, though some discussion of the Iran war (or, God willing, the war’s end) will probably be in order. A link to the Zoom call is at the bottom of this newsletter.
—Bob
The knives are out for OpenAI CEO Sam Altman, both figuratively and pretty close to literally. Last week—a few days after publication of a New Yorker piece featuring tons of anonymous attestations to Altman’s duplicity, and a few days before a Wall Street Journal piece about his financial conflicts of interest—an anti-AI extremist threw a Molotov cocktail at Altman’s house.
Altman suggested a causal link between the two kinds of attacks. In a blog post about the attempted firebombing, he called the New Yorker article “incendiary” and noted that it came at a time of “great anxiety about AI.” Maybe, he mused, the attack on his home was testament to “the power of words and narratives.”
A debate about the relationship between strong words and radical action was already happening in AI circles, owing to an armed attack a week earlier on the home of an Indianapolis politician who supported the construction of a local data center. The Altman attack turned the volume up a notch.
Nirit Weiss-Blatt, who writes an anti-AI-alarmism newsletter, assessed some Discord posts written by the accused firebomber and concluded that he had undergone “a long radicalization process shaped by ‘AI existential risk’ rhetoric.” OpenAI’s global policy chief, Chris Lehane, connected the Altman attack to the “very, very negative and dark view of humanity” held by “the Doomers.” Lehane said, “When you put some of those thoughts and ideas out there, they do have consequences.” Putting a finer point on it, White House AI adviser (and wealthy veteran of Silicon Valley) Sriram Krishnan highlighted the title of a book co-authored by Nate Soares and Doomer-in-chief Eliezer Yudkowsky. “This is the logical outcome of ‘If we build it everyone dies,’” Krishnan tweeted.
Here’s my radical take: If there’s a sense in which finger-pointing is in order, some fingers should be pointed toward Silicon Valley—pointed at the culture of self-serving tech libertarianism that people like Krishnan represent. As Exhibit A, I direct your attention to last week’s edition of the very influential podcast All In, whose cast of tech bro regulars includes White House AI Czar David Sacks and which is to Silicon Valley roughly what Pravda was to the Soviet Union (and which, for that matter, is to the Trump administration roughly what Pravda was to the Soviet Union).
But first a note about the senses in which finger-pointing is and isn’t in order:
Obviously, none of us—not the New Yorker, not Krishnan, not Sacks, and, above all, not me—is morally responsible for the actions of people who are inspired or provoked by our words to do things we don’t espouse and didn’t mean to encourage. And if we’re held legally responsible for such things, that’s a threat to the foundations of liberal democracy.
That said, it is legitimate, and can be healthy, to argue that certain kinds of speech, though constitutionally protected, and perhaps morally defensible as well, do make certain kinds of reprehensible behavior more likely. Among other virtues, getting clearer on the causal connections between speech and bad behavior can help conscientious people refine their messaging to reduce its cost/benefit ratio. This clarity can also inform people’s judgments about which kinds of speech, though constitutionally protected, nonetheless deserve moral condemnation.
Now back to All In. Last week’s episode began with a discussion of Anthropic’s new but unreleased large language model Mythos, which I wrote about in last week’s Earthling. Mythos, apparently, has demonstrated an uncanny ability to find, and in many cases exploit, vulnerabilities in software; it has found holes in all major operating systems and browsers, among other platforms. So Anthropic is sharing Mythos with companies that maintain big parts of the digital infrastructure, like Microsoft and Apple and Cisco and Google, so they can find holes and patch them.
The All In discussion of Mythos began with guest Brad Gerstner, a Silicon Valley investor, who applauded Anthropic’s actions and used them as an object lesson in how little regulation AI needs. “What I like about this is they didn’t need government to hold their hand on this,” he said of the team at Anthropic. “We have plenty of government regulations.”
Now let’s think about this.
If what Anthropic says about Mythos is true, then we may have just dodged a very big bullet. This is an AI that, in the wrong hands, could do God knows what—take down power grids, vacuum up the life savings of retirees, induce a panic or a stock market crash by hijacking social media accounts en masse. And Gerstner seems to agree that the stakes are this high. If Mythos had been released, he said, it “would wreak havoc”—it would have “broken a lot of core things on the internet” and would “allow offensive hacking.” Yet he’s happy to leave the decision about releasing such tools in the hands of whoever happens to be running whichever AI company happens to develop the next comparably dangerous tool—and the one after that and the one after that and…
And it’s not like envisioning an occasionally reckless CEO of less than sterling character takes a wild leap of imagination. That New Yorker piece quoted one acquaintance of Sam Altman calling him a “sociopath” and a former member of OpenAI’s board describing him as having “almost a sociopathic lack of concern” for the consequences of his deceptions.
The source of Gerstner’s faith in the judgment and character of all present and future AI CEOs is a little unclear. At one point he seems to suggest that self-interest will lead them to do the right thing. What Anthropic did was in “the best long-term interest of the company,” he says. Yet he prefaces his remarks by saying Anthropic deserves “a ton of credit here” and ends them by saying that Anthropic CEO Dario Amodei and his team “deserve a lot of credit”—even though “credit” isn’t something we normally award for pursuing self-interest.
Maybe Gerstner’s point was that pursuing your long-term self-interest can be hard when you’re in a fiercely competitive AI race, and releasing a world-beatingly powerful model could generate a lot of revenue pronto and steal the thunder of your most bitter rival. If so, I agree. Which is exactly why I say we shouldn’t leave these things entirely in the hands of AI companies. Indeed, given the rumors that OpenAI will soon announce a Mythos-level model, I don’t feel wholly confident that either Amodei or Altman will, in the coming months, exercise as much self-restraint as society deserves.
By the way, the investment company Gerstner runs, Altimeter Capital, has stakes in Anthropic and OpenAI worth billions of dollars. So it’s possible that his views on government regulation of the AI industry aren’t entirely free of bias.
All In host Jason Calacanis did disclose Gerstner’s stake in these companies, so it’s not like this was a covert influence op. But in a way that’s my point: I think one thing that drives some doomers crazy—figuratively and, perhaps in the case of Altman’s assailant, literally—is the perception that serious discussion about regulating AI is being thwarted by rich and powerful people who are serving their own interests at the expense of the public interest. These Silicon Valley potentates wield great influence via various levers of power, and they do so conspicuously and unabashedly.
For example, Greg Brockman, president of OpenAI, has donated $50 million to Leading the Future, a Super PAC that’s devoted to squashing various regulatory initiatives and kneecapping congressional candidates who support them. I assume he’s happy with his work so far; as of now there is zero meaningful federal regulation of AI. And, lest any states fill the void, Leading the Future backed a law that would have banned all AI regulation at the state level for 10 years.
I’m not saying Brockman bears any moral responsibility for the firebombing of Sam Altman’s house—any more than Eliezer Yudkowsky does. But if you’re speculating about the causal forces that came together in the firebomber’s presumably unbalanced mind to catalyze his crime, I don’t see any reason to give more weight to Yudkowsky’s arguments than to the perception that powerful people like Brockman are making it hard for such arguments to get a fair hearing via conventional political channels.
On this week’s episode of the New York Times tech podcast Hard Fork, co-host Casey Newton, in the course of discussing the attack on Altman’s house, said the following:
This AI moment that we’re living through is a top-down moment. It did not rise up from the grass roots from a bunch of nerds getting together in their garages and training frontier models. It was a small group of really smart people who were able to get access to massive amounts of capital from the elites in our society, and they’re now mounting this effort to build it very quickly, deploy it very quickly without a lot of guard rails. I think when the average person looks at this, they think, not only did I not ask for this but I have no meaningful control over it. And I think that’s a big reason you’re seeing people so furious—because I think, particularly on the left, this just looks like mostly a right wing elite project that’s being championed by President Trump and the many venture capitalists that are in his administration, and if you’re already worried it’s going to take your job, and you think you don’t have any control over it, well of course you’re going to hate it.
That sounds about right to me. People worried about the impact of AI look around and see no serious consideration of the possibility that this technology is moving too fast—at least, no consideration that has had any impact at the policy level. And what they do see—on the All In podcast and elsewhere—is rich, powerful people who, like Gerstner, make seemingly facile arguments yet carry the day. And often these arguments are aired in forums that, like the All In podcast and God knows how many pricey tech conferences, have the trappings of debate but lack the diversity of viewpoint that would make the debate meaningful.
A question that may have more relevance to all this than meets the eye: Why did we see, in the course of a couple of weeks, both the New Yorker’s massively documented piece about Sam Altman’s deceptive tendencies and the Wall Street Journal’s piece about his financial conflicts of interest? After all, the outlines of both stories have long been evident. Karen Hao’s book Empire of AI, for example, convincingly depicted Altman as a slippery character whose word isn’t exactly his bond.
Maybe part of the answer is that, as OpenAI has over the past half-year lost ground to Anthropic, and has seemed less and less sure-footed, discontent with Altman’s performance as CEO has grown. And the less confident OpenAI investors and workers feel about the future value of their equity in the company, the more open they are to a change in the CEO suite—so the more willing they are to share with journalists criticisms of the current CEO that they’d previously kept under wraps. Indeed, the Wall Street Journal piece reports that there is more and more talk, among OpenAI stakeholders, about replacing Altman.
This is just speculation on my part, and it could well be wrong. But, even so, the timing of this sudden proliferation of doubts about Altman’s character is consistent with a view held by some critics of the tech industry: that Silicon Valley is fine with CEOs who lack integrity so long as they’re good at making money. This is one of many unflattering perceptions that it may not be in Silicon Valley’s long-term interest to foster.

Banners and graphics by Clark McGillis.