This episode includes an Overtime segment that’s available to paid subscribers. If you’re not a paid subscriber, we of course encourage you to become one. If you are a paid subscriber, you can access the complete conversation either via the audio and video on this post or by setting up the paid-subscriber podcast feed, which includes all exclusive audio content. To set that up, simply grab the RSS feed from this page (by first clicking the “...” icon on the audio player) and paste it into your favorite podcast app. If you have trouble completing the process, check out this super-simple how-to guide.
0:00 Teaser
1:07 Defining AI succession
3:51 How positive psychology led Dan to AI research
11:20 Is Dan a “successionist”?
17:40 Does it matter whether AI is sentient?
25:13 Two traits of a worthy successor
32:03 The varieties of succession scenarios
36:40 Values and interests
41:52 Will AI have any use for us?
44:33 Succession scenarios to avoid
49:42 Heading into Overtime
Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Dan Faggella (Emerj AI Research, The Trajectory). Recorded January 7, 2026.
Twitter: https://twitter.com/NonzeroPods
Overtime titles:
The current pace of AI progress.
Near-term AI-related problems.
What makes values lasting?
Successionism and accelerationism: Compare and contrast.
Will it take a catastrophe to enlighten us?
How AI moguls rationalize their race to AGI.
Dan's cosmic p(doom).