
Two Visions of AI Apocalypse (Robert Wright & David Krueger)

This episode includes an Overtime segment that’s available to paid subscribers. If you’re not a paid subscriber, we of course encourage you to become one. If you are a paid subscriber, you can access the complete conversation either via the audio and video on this post or by setting up the paid-subscriber podcast feed, which includes all exclusive audio content. To set that up, simply grab the RSS feed from this page (by first clicking the “...” icon on the audio player) and paste it into your favorite podcast app.

If you have trouble completing the process, check out this super-simple how-to guide.

0:00 How Eliezer Yudkowsky’s new book envisions AI takeover
8:24 Will we ever really know how AI works?
15:10 The “paperclip maximizer” problem revisited
26:13 Will advanced AIs be insatiable?
31:39 David’s alternative takeover scenario: gradual disempowerment
43:31 Can—and should—we keep humans in the loop?
51:46 Heading to Overtime

Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and David Krueger (Université de Montréal). Recorded September 24, 2025.

Twitter: https://twitter.com/NonzeroPods

Overtime titles:

AI accelerationists: true believers or trolls?
How David became an AI safety early adopter.
AI safety’s international cooperation gap.
“Mutually Assured AI Malfunction” and WWIII.
David: Superintelligence is (pretty) near.
What’s the deal with AI “situational awareness”?
What’s the deal with Leopold Aschenbrenner?
