NonZero Newsletter
Robert Wright's Nonzero
How to Not Lose Control of AI (Robert Wright & Max Tegmark)

This episode includes an Overtime segment that’s available to paid subscribers. If you’re not a paid subscriber, we of course encourage you to become one. If you are a paid subscriber, you can access the complete conversation either via the audio and video on this post or by setting up the paid-subscriber podcast feed, which includes all exclusive audio content. To set that up, simply grab the RSS feed from this page (by first clicking the “...” icon on the audio player) and paste it into your favorite podcast app.

If you have trouble completing the process, check out this super-simple how-to guide.

0:00 Why Max helped organize the 2023 “Pause AI” letter
6:48 AI as a species, not a technology
12:08 The rate of AI progress since 2023
21:20 Loss of control is the biggest risk
32:18 How to get the NatSec community’s attention
35:48 How we could lose control
37:53 How we stay in control
47:06 Heading to Overtime

Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Max Tegmark (MIT, Future of Life Institute, Life 3.0). Recorded April 30, 2025.

Twitter: https://twitter.com/NonzeroPods

Overtime titles:

0:00 Alignment's inherent shortcomings
6:23 AI autonomy
10:57 Regulations and governance
16:16 The importance of “organic” dialogue
20:53 How afraid of China are leaders really?
25:33 JD Vance’s Paris AI Summit speech
28:01 Cosmic directionality

Overtime video: