Listen now | 0:00 Army brats and media empires
1:28 Is war with Iran imminent?
4:12 Trump’s “red line” trap
10:00 Are we underestimating Iran?
18:28 Israel and the “clean break” memo
25:03 Rubio as deep state puppet master
31:25 Trump’s AI report card
35:16 The race for superintelligence
41:11 Is AI an existential threat?
52:52 Will AI trigger mass unemployment?
59:06 Chip restrictions as war starters
1:03:21 Does Xi think his Taiwan window is closing?
Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True), Andrew Day (The American Conservative), and Curt Mills (The American Conservative). Recorded February 04, 2026.
Twitter: https://twitter.com/NonzeroPods
Because Curt mentioned it: I think what the two combative researchers were trying to argue was *not* that chatbots aren't useful for researchers or journalists, but that most people aren't researchers or journalists, and therefore have no need for them in that context. (How AIs will affect human work more generally is a different matter, though v interesting.)
And tbh, I personally enjoyed how combative the two previous guests were. If Bob were less interested in proving his own point and more interested in investigating their perspective and then relating it to his own, that would at least have been instructive. (But Bob ain't like that lol)
Something that does annoy me about the AI discourse, however, is how people like Bob (and all the other crazy utilitarians obsessed with AI) use language.
Me, I'm just a painter, part of the lowly banausoi. But it seems true that when you say something like, "I can't know if ChatGPT is conscious," you're using the verb 'to know' in an especially odd way. You're using it the same way you would say, "I don't know that someone is in that house over there." That is, you could go over to that house, knock on the door, and if you really wanted to know whether someone was in there, you could break in and look around. But, as Bob surely "knows," he can't have an experience for me any more than he can wear my pants for me.
Maybe think a little harder, guys; it's kind of your job. (Although if all this ignorance is just a power move, then that's what it is.)
Great discussion on chip restrictions as potential conflict triggers. The point about superintelligence race dynamics ties directly to the Taiwan Strait tension in ways most coverage misses. When export controls become explicit technology denial aimed at slowing a competitor's AI development, they shift from economic policy to something closer to a containment strategy. We saw this play out differently in the Cold War with tech transfer controls, but the timescale here is far more compressed.