Listen now (71 mins) | 0:00 Teaser
0:59 Alex and Emily’s book, The AI Con
5:05 Beyond the AI booster-doomer spectrum
10:27 What—and who—is AI actually useful for?
19:54 The AI productivity question(s)
30:48 Emily: AI won’t take your job, it’ll just make it shi**ier
39:50 Stochastic parrots, Chinese rooms, and semantics
46:04 Are LLMs really black boxes?
56:33 Do AIs “understand” things?
59:49 Debating using AIs as experts
1:08:15 How “neural” are neural nets, really?
Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True), Alex Hanna (The AI Con, Distributed AI Research Institute) and Emily Bender (The AI Con, Computational Linguistics Laboratory). Recorded January 20, 2026.
Twitter: https://twitter.com/NonzeroPods
Bob: Today we have the authors of "Cars are useless", who have made some interesting counter points to the hype that's been surrounding cars for the last hundred or so years. Can you summarize the main points of your book?
Alex: Cars are useless, they actually make people slower, they kill people, and they are bad for the environment. We just do not believe they pose any value whatsoever to society.
Bob: I work as a taxi driver, and -
Emily: Before we begin I would like you to acknowledge that I am not a car.
Bob: Uhm ok, yes, I acknowledge that. Anyway, as I was saying I've been using cars for the last few years and I do find them pretty helpful for getting around.
Emily: Well, you're wrong - it's ROADS that make you faster, and those have been around for thousands of years. I suspect if you are using cars as your main way of driving people around - you are probably a very bad taxi driver.
Bob: Do you ever use cars?
Emily: I refuse to get a drivers license, and I do everything I can to avoid looking at cars, because as I mentioned - they are bad along every conceivable dimension.
Alex: Sometimes I'll check in periodically to see if they've gotten better, but they are still completely incapable of hopping over even the smallest fences.
Bob: Don't you think -
Emily: I have to go now. I have to get to an appointment on the other side of town tomorrow.
Clever - as long as you ignore the substance of anything they said.
Bob: Today we have Peter O'Connor, author of "Cars: The Magical Solution to Everything," who's been hyping cars as the ultimate game-changer for society. Can you summarize your main points?
Peter: Cars are incredible. They make people faster, connect the world, and are basically sentient beings on wheels. Without cars, we'd still be in the Stone Age. They're going to achieve full autonomy any day now and solve all our problems.
Bob: I work as a pedestrian advocate, and—
Peter: Before we begin, I would like you to acknowledge that cars are basically people. They have feelings and understand the road.
Bob: Uhm, okay, but cars aren't actually alive or understanding anything. They just follow mechanical patterns and burn fuel. Anyway, as I was saying, I've seen cars cause massive pollution, thousands of deaths a year, and gridlock that actually slows everyone down.
Peter: Well, you're wrong. It's the DRIVERS who are the problem, not the cars. Cars are pure innovation. If you're not using cars for everything, you're probably just a Luddite afraid of progress.
Bob: Do you ever consider the downsides, like environmental destruction or exploitation in mining for parts?
Peter: I refuse to look at any data that doesn't hype cars. I do everything I can to promote them because they're good along every conceivable dimension. Scale them up, add more lanes, and they'll become superintelligent!
Bob: But don't cars still crash into fences and fail at basic tasks without constant human intervention?
Peter: Sometimes I'll check in on critics, but they're still completely incapable of appreciating how cars will hop over every obstacle with enough investment.
Bob: Don't you think...
Peter: I have to go now. My self-driving car is about to revolutionize my commute across town.
This is just tu quoque / whataboutery -- all you are doing here is saying "yeah but what about the AI hypers"? And even in your attempt, you poke fun at "car intelligence" as if that absurdity somehow translates to LLM intelligence...
This kind of "which side are you on" mentality is part of what people are criticising Bender & Hanna for.
Arguably the worst quality conversation I’ve heard on this podcast. Bob does an extraordinary job interviewing people, but these two guests were not up to the task of having a real conversation, which is ironic given their self-image as language people. It was frustrating to listen to them failing to do the listening part of having a conversation.
As a software engineer with two CS degrees who has also built these models, I couldn't agree more with Alex and Emily, but gosh was Emily rude. I use them sparingly to write Rust backend code (they suck at dealing with complex systems) and heavily to write front-end React on a daily basis. They are useful, but they are not intelligent or sentient in any way. It's fancy semantic search in vector space. I wish more people knew that, and appreciated that the tech oligarchs are using the specter of AI to make people's lives worse, but decorum goes a long way. I feel so sad knowing that most non-technical people who listen to this interview are not going to see past the rudeness.
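For non-technical readers, "semantic search in vector space" can be illustrated with a toy sketch: text is mapped to vectors, and search ranks documents by the angle between their vectors. Everything below (the documents, the bag-of-words embedding) is made up for illustration; real systems use learned neural embeddings, but the geometry is the same idea:

```python
import math

# Hypothetical toy corpus. In a real system these would be embedded by a
# trained model; here we use simple word counts to keep it self-contained.
docs = [
    "rust backend services and async systems",
    "react components and front-end state",
    "gradient descent training for neural nets",
]
vocab = sorted({w for d in docs for w in d.split()})

def embed(text):
    # One dimension per vocabulary word; value = how often it appears.
    words = text.split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, 0.0 for unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query):
    # Return the document whose vector points closest to the query's.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

print(search("front-end react state"))  # matches the React document
```

Whether this is a fair description of what an LLM does is exactly what the replies below dispute; the sketch only shows what the phrase means.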
"Fancy semantic search in vector space" isn't actually true, though?
Like, LLMs are arbitrary programs optimized by gradient descent.
They have lots of complicated internal mechanisms, which people are working on reverse engineering, for doing things like planning what the last word of a line in a poem needs to be for rhyming, doing arithmetic (see the paper "Language Models Use Trigonometry to Do Addition"), and all kinds of other things.
They don't in fact just contain a database of dataset examples to do semantic search over, or anything like that, and obviously that wouldn't be as good at writing code.
And clearly semantic search itself would not be trivial either; it would require complicated mechanisms for the "semantic" part.
And if you wanted your code written with a specific variable name, search couldn't do it, because it wouldn't be able to find a dataset example with that exact code and that exact name.
(And no, you can't just add "fancy" and cover all the ways it's obviously not just semantic search over a database. I've seen people say that, but to me it's like calling the Earth a "fancy flat shape": when people point to the curvature, you say it's not actually just flat, but it's essentially flat.)
(Btw, side note: I'm confused and slightly curious about what you mean by "2 CS degrees," though I don't care much.)
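The "programs optimized by gradient descent" point above can be made concrete with a toy example: nothing is retrieved from a database; instead, two parameters are nudged along the error gradient until the model reproduces the target function. A minimal sketch (the data, learning rate, and iteration count are made up for illustration and have nothing to do with real LLM training scales):

```python
# Fit y = w*x + b to data by gradient descent on mean squared error.
# The true relationship is y = 2x + 1; the model starts knowing nothing.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b, lr = 0.0, 0.0, 0.01

for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Step downhill: adjust parameters against the gradient.
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

The same mechanism, scaled up to billions of parameters and a next-token objective, is what produces an LLM's weights; what structure those weights end up encoding is the open question.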
I apologize; I was being imprecise for the purpose of being understood by non-technical people. What I'm mainly trying to get at is that these LLMs are fundamentally stochastic operations that leverage properties known to exist in language since the 50s, like Emily said. There is no real reasoning going on here. That's why these models completely fall apart when it comes to high-level, long-horizon tasks like producing good research.
I have my bachelor's in computer science from Stanford and my master's, also in computer science, from Brown. I don't mean to brag, but I feel the need to separate my statements from those of all the loud yahoos and grifters in this craze.
LLMs are neural nets, which can represent any function, though? You can't tell whether they are reasoning or not from the basic operations they perform, because those operations are Turing complete (in the limit of infinite width or infinite context, for example). So I don't know what you even mean by "stochastic operations that leverage elements that have been known to exist in language since the 50s." The operations aren't even stochastic, except for the final step, where the model outputs probabilities and a token is sampled from them; a forward pass in a transformer is, on paper, deterministic, in that the same input always yields the same probability distribution (not exactly in practice, but only for mundane implementation reasons).
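The determinism point above can be sketched in a few lines: the "forward pass" (here reduced to just a softmax over made-up logits, standing in for a real transformer) always maps the same input to the same distribution, and randomness enters only at the final sampling step:

```python
import math, random

def softmax(logits):
    # Subtract the max for numerical stability, then normalize.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical fixed logits for three candidate tokens. Running the
# "model" twice on the same input gives the identical distribution.
logits = [2.0, 1.0, 0.1]
p1 = softmax(logits)
p2 = softmax(logits)
assert p1 == p2  # the deterministic part

# Stochasticity appears only here, when a token is drawn from the
# distribution; with a fixed seed even this becomes reproducible.
random.seed(0)
token = random.choices(["the", "a", "cat"], weights=p1)[0]
print(token)
```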
Neural nets are old, but obviously people are building bigger ones with new methods today, and nothing about their being known before tells us much about what their capabilities are today or could be in the future.
Especially since (again, this is an important point) they can in fact represent any program: there's interesting structure there that's different in GPT-4 than in GPT-2, or in an image generator, or in AlphaFold, even though they're all neural nets, and the question of whether any "reasoning" is happening depends on exactly what is represented in the weights.
At best you can guess, based on things like watching them fail at tasks where real reasoning would have succeeded (though I feel people often impose requirements here that would disqualify most humans from reasoning), or on a theory of what it's possible to learn from certain forms of training, but that gets into hugely controversial points in the field.
I feel a bit annoyed, because I see a lot of people who start posts by citing their credentials and then say something wrong, or something genuinely controversial in the field, to the point that starting a post with "As an X" is a red flag for me, and this seems like an example of that.
Like, I also have a bachelor's and a master's in CS and a published paper in the field, but I don't feel the need to start posts with "As someone with 2 CS degrees" or "As an AI researcher." And if I did, I would want to be quite sure of what I was saying, and not say something that other people with similar or greater credentials disagree with (I mean, see the stuff Hinton says). Otherwise the credential becomes meaningless and misleads people into thinking the claim is obvious to anyone who has those credentials, and that people who say otherwise can't know what they're talking about.
you expect non-technical people to know what semantic search in vector space is?
edit: tbh you remind me a lot of the guests; dishonest face-saving after someone points out you’re wrong, followed by dramatic unsupported gestures and appeals to authority. lol
These two authors are incredibly rude, unnecessarily combative and defensive. If they are hoping to persuade people of their argument, they are getting in their own way.
https://youtu.be/MwfSCCo6jXs?si=iIZIgVxZVV5M9OqQ&t=3636 Yep, I'm sure if you're "an Arab-Coptic trans woman", the technology to produce a summary of most general-knowledge questions on demand that's at least as good as a typical high school textbook is of no value.
I love a good, disagreeable interview, but this one was a total fail for me: the guests didn't do a good job of providing a solid thing to replace the discourse they reject. They probably have it but don't know how to present it in a useful way in a conversation without a book-length context or something?
Listening to these guests has me thinking that backlash to self-righteous, rude, and "woke" people like them is a big reason Trump is in the White House. That said, I agree with many of their perspectives on LLM-based AI.
I think Trump is in office because Kamala didn't give two shits about the genocide/working class people. I think that's hardly woke. In fact, it's just cruelty.
He lied indeed. However, he also secured something resembling a "ceasefire" -- something Biden never attempted. I'd rather have Kamala in office, but the Democratic Party is pathetic, even though I will continue to vote for them.
Putting aside that I would never want to have a conversation with these two people, for fear that they would take any particular joke I make or idea that I have as a direct attack on them...
I find it frustrating how many people's viewpoints arguing against AI get completely caught up in whether it's useful (or, I guess, in semantics about what "useful" even means). It's completely deluded. I understand that you feel it's disgusting, but to prematurely decide these systems are dumb and ignore the rate of progress is to completely miss the larger problem. They will bring tremendous changes for our society: wealth distribution, personalized propaganda, surveillance, slaughter-bots, and engineered pandemics. What happens when the marginal value of the human as a means of production decreases to the point that labor loses its bargaining power? But even more immediately, for the individual: while I find using AI for my job incredibly useful (programming, but also research), I also see how it builds a "cognitive debt." This could be so bad for our brains.
I feel embarrassed when the "anti-AI" viewpoint doesn't even take the time to understand how quickly the capabilities are increasing. I'm sorry, but I'm not going to pay much attention to you telling me that you published a paper on how they suck five years ago. If you haven't used them in five years, you don't know how powerful they've become. These guests aren't making contact with the real issues here.
This was a great podcast, I loved the mildly heated discussion and I loved hearing the contrarian position. I don't agree with them, but it was fascinating. Also, what's the deal with Emily Bender not engaging with anyone who doesn't posit that she has a human experience? It just seems nutty needing such reassurance.
Wow. In some ways a tough listen but an important node in the variety of AI perspectives. This skeptical / antagonistic view is more common than AI podcasts listeners might realize. It's a fundamentally moral cry from the AI Ethics tradition of critically examining the bias, exploitation and alienation in technological automation, and I appreciate hearing it.
Quick thoughts:
- Lots of people are scared about the world in so many ways right now! The disruptive quality of AI, whether in reality or merely as a new source of fear and uncertainty, compounds what many feel is a failing society: failing economy, fragmenting politics, environmental instability. AI talk (as espoused by insiders, boosters and doomers alike) can sound like homilies from millenarian prophets. AI as actually experienced now is mostly a sort of alienating slop that feels increasingly ubiquitous (especially video and images). Bender & Hanna are trying to dispel this and ground us back in the real: other human beings and the society we (not the architects of capital) want.
- There is this scientific / philosophical question that is most interesting to me and hangs over this conversation: what, if anything, did the phenomenal success of scaling show to be true about the nature of language and/or intelligence (especially in the GPT3 transition moment)? Bender is right to point to the skepticism of computational linguists towards the very idea that these systems could bear "meaning". This is a philosophical point but also a scientific one: we don't know what it is the brain does that is distinct from these LLM architectures (as Bob points out), and we certainly don't have consensus about the philosophical limits of language! It seems to me like the counter-intuitive properties of high dimensional spaces enable LLMs to perform feats that I imagine Bender (following Chomsky) would have said were impossible through the technique of next-token transformer models alone. Sufficient data and training demonstrated some real inflection point where new emergent structures appear in the network that allow it to do...? The quest to understand what semantic structures exist in LLMs (or multimodal systems, perhaps more interestingly) is such a juicy scientific project with so many implications. The further and most important question, then, is whether fundamentally "new" semantics can possibly emerge from these systems: new meaning, new forms of knowledge, new art, new science. The combined human-society-technology system, the big meta system, is doing something though!
- If even 2026 LLMs don't meet Bender's threshold for "understanding" or personification, which I think is fair politically if not scientifically, it seems clear that we've yet to see the end of this wave of machine-learning breakthroughs. Systems that combine LLMs with increasingly multimodal data are becoming embodied, whether in agents or in robotics, a field also going through revolutionary breakthroughs. If these holistic systems begin to pass the test for skeptical linguists like Bender, I think we'll have to admit that the LLMs of the early 2020s represented a meaningful scientific breakthrough. The leading industry scientists now seem to be admitting, though, that scaling alone will not push LLMs where they had hoped in 2022; but of course they are far past simple scaling in their labs, with complex new architectures and techniques. No matter how you shake it, though, contra Bender, this AI summer has been a hot one.
- The ghoulish specter of social disruption led by a Trumpified Silicon Valley is terrifying to me. The actually existing manifestation of AI as a series of customer service nightmares, deepfake videos, erotic solicitations, and frustrating search results sets a scary tone. I am much more aligned with Bender and Hanna in their political revulsion at this moment and at the unexamined, propagandistic way AI has slipped into our daily discourse. Scientifically and philosophically I think I am more aligned with Bob! I look forward to his book as a way to think through some of these foundational questions.
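One of the counter-intuitive high-dimensional properties mentioned in the thoughts above can be demonstrated directly: random directions in low dimensions are often substantially aligned, while in high dimensions almost every pair is nearly orthogonal, which is part of how an embedding space can keep huge numbers of concepts roughly independent. A small sketch (the dimensions and trial counts are arbitrary choices for illustration):

```python
import math, random

def unit_vector(dim, rng):
    # A uniformly random direction: Gaussian components, then normalize.
    v = [rng.gauss(0, 1) for _ in range(dim)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def avg_abs_cosine(dim, trials=200, seed=0):
    # Average |cosine similarity| between pairs of random directions.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        a = unit_vector(dim, rng)
        b = unit_vector(dim, rng)
        total += abs(sum(x * y for x, y in zip(a, b)))
    return total / trials

# In 3 dimensions random pairs are often well-aligned (average near 0.5);
# in 1000 dimensions almost all pairs are nearly orthogonal (near 0.03).
print(avg_abs_cosine(3), avg_abs_cosine(1000))
```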
I see in the substack comments, some people are inclined to mock and be repelled by the discourse style of Hanna and Bender. But I appreciate their contentiousness. It's a breath of fresh air compared to what I usually hear on podcasts where hosts and guests talk for hours about how much they agree with each other, and how wonderful and insightful they each are.
I get the reasons why people were put off by Bender and Hanna - their focus on semantics can be annoying or come off as pedantic. But I think they're making arguments that are important to interrogate. Here's a related blog post and comment thread at Andrew Gelman's blog:
I'm a bit more skeptical than Bob on the promise/dangers of AI, but this interview is almost flipping my allegiances. Really head-in-the-sand stuff here. There's just no way this technology isn't in some sense a time-saver; I've definitely seen direct examples of that. There's no need for these two to be so caustic. They should take the time to find and interview people who are using it successfully, and fold that into their understanding of the technology.
My critique was always that it's important not to buy the hype and just assume they will change everything forever. But it is going to have uses.
I work in elearning. We create online courses for big corporates. Many of my junior colleagues have lost their jobs because AI can write course copy significantly faster, and in some cases, significantly better than they can, at no significant cost to the company. Hence, those colleagues have lost their jobs. So there's a concrete example for you.
I question AI's ability to write course copy better than professionals, unless the professionals totally sucked at their jobs. There is a lot of incompetence in the workplace, and COVID exposed a bunch of people who don't contribute in a meaningful way. Probably 40% of "knowledge workers" could be sacked with no noticeable change in institutional effectiveness. But that isn't AI replacement of workers.
If they suck at their jobs, then they aren't workers... they are employees. They get paychecks, eat up resources, go to meetings, complain to HR, are completely incompetent, can spend hours telling you why they can't do something that should take five minutes, and spend all their time gossiping and bad-mouthing the company. Firing a bad worker doesn't mean replacing them; it's addition by subtraction. Blame it on AI or sunspots or global warming.
Sorry if that was aggressive. I don’t know if AI has been proven to enhance productivity from an economic standpoint; in fact, I don’t think there is much evidence that it does. There is some St. Louis Fed research, but it’s pretty thin. We all have the same questions. I didn’t mean to be a jerk about it, because I don’t know the answer either. In fact, that is what annoys me about this topic, not people like you who want to learn. Again, sorry.
Don’t worry about it. I’m quite skeptical of a lot of the big talk about AI.
I just think this is a new enough tech where sometimes anecdotal evidence of use is not irrelevant. It’s saved me time at work today, pulling some stats. There’s no need for a serious critic to just double down and claim it will have 0 use.
Something that frustrates Bender, and me for that matter, is that the tech is not new. I’ve been working on it for decades. It isn’t SPSS or R... it cannot perform descriptive statistics or regression analysis, but it will ingest the data sets you feed it and produce analytic language that is plausible but complete garbage. You know that, but many do not. It is infecting institutional workflows.
I honestly can’t speak to that — i’m not in tech! I had it doing mind-numbingly simple tasks for the non profit school where I work.
The technology at its heart may not be new but the chat bot interface allows fools like me to do lots of basic stuff we might not have otherwise. I’m sure this is happening all over.
Good lord, could this have been any worse? I feel like Bob was caught off guard by their hostility, and the whole thing went off the rails. I wish I were a good enough writer to express how insufferable I found these two to be.
I had to turn this off after 10 minutes. Came here to see if other people found Bender to be unbearably rude, obnoxious, and self righteous. Nice to know I'm not alone.
It's unfortunate that Emily's combativeness overshadowed her message and tended to derail the exchange in some respects. Still, it is an interesting dialogue.
My God. Haven't finished the conversation yet but couldn’t agree more. I wasn't sure if I was misinterpreting the very intense seeming combativeness and offense taking. Very strange but Bob rolls with it and it's definitely an interesting perspective to take in even if the messengers are more than a little cringe inducing.
Brilliant. This captures the exact quality of the “conversation.”
I'm pointing out the ease of argument ad absurdum.
I'm pointing out that it works when Peter does it but not when you do it.
Lol. OK.
Hilarious! I would give you a lot of credit but you basically transcribed the actual conversation.
Eh. These guests were no more shrill than that China hawk Jordan Schneider. Bob actually complained about that obtuse fella a few podcasts later LOL
I come away from this conversation absolutely convinced of two things:
1. Emily Bender and Alex Hanna are jerks.
2. Hearing them talk is unpleasant.
Beyond that man, I don’t know…
I would like to reply to your comment. But before I do, please posit my humanity as an axiom in this conversation.
Arguably the worst quality conversation I’ve heard on this podcast. Bob does an extraordinary job interviewing people but these two guests were not up to the task of having a real conversation, which is ironic given their self images as being language people. It was frustrating to listen to them failing to do the listening part of having a conversation.
As a software engineer with two CS degrees who has also built these models, I couldn't agree more with Alex and Emily but gosh was Emily rude. I too have built these models. I use them sparingly to write Rust backend code (they suck dealing with complex systems) and heavily to write front-end React on a daily basis. They are useful but they are not intelligent or sentient in any way. It's fancy semantic search in vector space. I wish more people would know that and appreciate that the tech oligarchs are using the specter of AI to make peoples' lives worse, but decorum goes a long way. I feel so sad knowing that most non-technical people who listen to this interview are not going to see past the rudeness.
Fancy semantic search in vector space is not true thou ?
Like LLM are arbitrary programs optimized by gradient descent.
They have lots of complicated internal mechanisms people are working on reverse engeniering to do things like plan what the last word of a line in a poem need to be for rimig , do arithmetic (see the paper "Language Models Use Trigonometry to Do Addition") and all kinds of things.
They don't in fact just contain a database of dataset examples to do semantic search on or anything like that and obiously that wouldn't be as good at writting code?.
And clearly semantic search would also not be trivial and require complicated mechanisms for the "semantic" part?
And If you wanted to write your code with a specific variable name It wouldn't because It woudln't be able to find a dataset example with that exact code with that exact name.
(And no you can't just add "fancy" and cover all the ways its obiously not just a semantic search over a database with that I've seen people say that but that seems to me like if you were saying the earth is a fancy flat shape and if people point to the curvature you say its not actually just flat , but its esentially flat).
(Btw side note I'm confused and sligtly curious by what you mean by 2 CS degrees thou don't care much)
I apologize I'm being inprecise for the purpose of being understood by non technical people. What I'm mainly trying to get at is that fundamentally these LLMs are stochastic operations that leverage elements that have been known to exist in language since the 50s like Emily said. There is no real reasoning going on here. That's why that models completely fall apart when it comes to high level longitudinal tasks like producing good research.
I have my bachelors in computer science from Stanford and masters also in computer science from Brown. I don't mean to brag but I feel the need to separate my statements from all the loud yahoos and grifters in this craze.
LLM are Neural Nets witch can represent any function thou?. You can't tell whether they are reasoning or not from what basic operations they are doing because the basic operations are Turing complete(in the limit of infinite width or infinite context fe). So duno what you even mean by "stochastic operations that leverage elements that have been known to exist in language since the 50". The operations are not even "stochastic"except for the part where they output probabilities and there's sampling from that, a forward pass in a transformer is just in paper deterministic as in if you take the same input you always get the same probability distribution(not completely in practice but for dumb implementation reasons).
Neural nets are old but obviously people are building bigger ones with new methods today and nothing about them being known before tells us much about what the capabilities are today or could be on the future?.
Especially since , again this is a important point , you can in fact represent any program with them, there's interesting structure there that's different in gpt4 than in gpt2 or in an image generator or alphaphold even thou they are all Neural Nets and the question of whether any "reasoning" is happening there depends on exactly whats represented on the weights.
At best you can guess based on things like seeing them fail at things and say real reasoning would have succeed at those tasks(thou I feel people often require things here that disqualify most humans from reasoning), or a theory of what its possible to learn from certain forms of training but this will get into hugely controversial points in the field.
I feel a bit annoyed cause I see a lot of people that start posts with saying they have credentials and then are wrong or saying something actually controversial on the field to the point its a red flag for me that someone starts a post with "As an X" and this seems like its an example of that.
Like I also have a bachelors and masters on CS and a paper published on the field but I don't feel the need to start post like "As someone with 2 cs degrees " or like "As an AI researcher" and If I did I would want to make pretty sure of the thing I'm saying and not say something other people with similar credentials or more (I mean see the stuff Hinton says) disagree with?. Cause otherwise it becomes meaningless and misleads people into thinking its obvious if you have those credentials and people who say otherwise can't know what they are talking about.
You expect non-technical people to know what semantic search in vector space is?
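For readers wondering what "semantic search in vector space" means: texts are mapped to vectors, and a query is matched by similarity of direction (cosine similarity) rather than keyword overlap. A minimal sketch, where the embedding table is entirely hypothetical (real systems use a learned embedding model):

```python
import math

# Hypothetical toy embeddings; real systems compute these with a learned model.
EMBEDDINGS = {
    "how to fix a flat tire": [0.9, 0.1, 0.0],
    "repairing a punctured bicycle wheel": [0.8, 0.2, 0.1],
    "best chocolate cake recipe": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: dot product of the vectors over the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_search(query_vec, k=1):
    """Return the k stored texts whose vectors point most nearly the same way as the query."""
    ranked = sorted(EMBEDDINGS.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# A query vector near the "tire" direction retrieves both tire documents,
# even though they share almost no keywords with each other.
results = semantic_search([0.85, 0.15, 0.05], k=2)
```

The design point is that "flat tire" and "punctured bicycle wheel" land near each other in the vector space despite different wording, which is exactly what keyword search misses.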
Tbh you remind me a lot of the guests: dishonest face-saving after someone points out you're wrong, followed by dramatic unsupported gestures and appeals to authority. lol
Unnecessarily rude which almost undermines their argument. I don’t disagree with them but boy are they unpleasant to listen to.
These two authors are incredibly rude, unnecessarily combative and defensive. If they are hoping to persuade people of their argument, they are getting in their own way.
https://youtu.be/MwfSCCo6jXs?si=iIZIgVxZVV5M9OqQ&t=3636 Yep, I'm sure if you're "an Arab-Coptic trans woman", the technology to produce a summary of most general-knowledge questions on demand that's at least as good as a typical high school textbook is of no value.
I can't believe these people are real.
👏 Yeah, the bit really pushed it over the edge.
Wait, why is that sentence so upsetting? I'm having to stop myself from typing the word "snowflake".
I'm 25 mins in and I appreciate how disagreeable the interview is. I think I've been coddled by too many fluff interviews in the past.
edit: They interpreted 95% of what you said in the worst light possible. That's frustrating! Bad faith!
I love a good, disagreeable interview, but this one was a total fail for me: the guests didn't do a good job of offering something solid to replace the discourse they reject. They probably have it, but don't know how to present it usefully in a conversation without a book-length context or something?
Also enjoyed the disagreeableness, I interpreted their push back as raising objections to things many people "accept", rather than bad faith.
Dr. Bender conflated a comparison between her idea and Searle's idea with a personal comparison between herself and Searle.
Listening to these guests has me thinking that backlash to self-righteous, rude, and "woke" people like them is a big reason Trump is in the White House. That said, I agree with many of their perspectives on LLM-based AI.
Yes. They completely embody the smug, condescending image of academics that Maga has.
I think Trump is in office because Kamala didn't give two shits about the genocide/working class people. I think that's hardly woke. In fact, it's just cruelty.
And Trump does?
No, but he assumed the posture. Kamala didn't even attempt.
So, "assumed the posture" means he lied. Just another day in the darkness of Trumpistan.
He lied indeed. However, he also secured something resembling a "ceasefire" -- something Biden never attempted. I'd rather have Kamala in office, but the Democratic Party is pathetic, even though I will continue to vote for them.
They are feckless. It's discouraging.
Putting aside that I would never want to have a conversation with these two people, for fear that they would take any particular joke I make or idea that I have as a direct attack on them...
I find it frustrating how many people's arguments against AI are so completely caught up in its not being useful (or, I guess, semantics about what "useful" even means). It's completely deluded. I understand that you find it disgusting, but to prematurely decide these systems are dumb and ignore the rate of progress is to completely miss the larger problem. They will bring tremendous changes for our society: the wealth distribution, personalized propaganda, surveillance, slaughter-bots, and engineered pandemics. What happens when the marginal value of the human as a means of production decreases to such a degree that labor loses its bargaining power? But even more immediately, for the individual: while I find using AI for my job incredibly useful (programming, but also research), I also see how it builds a "cognitive debt." That could be so bad for our brains.
I feel embarrassed when the "anti-AI" viewpoint doesn't even take the time to understand how quickly the capabilities are increasing. I'm sorry, but I'm not going to pay much attention to you telling me that you published a paper on how they suck five years ago. If you haven't used them in five years, you don't know how powerful they've become. These guests aren't making contact with the real issues here.
This was a great podcast. I loved the mildly heated discussion and I loved hearing the contrarian position. I don't agree with them, but it was fascinating. Also, what's the deal with Emily Bender not engaging with anyone who won't first affirm that she has a human experience? It seems nutty to need such reassurance.
Wow. In some ways a tough listen but an important node in the variety of AI perspectives. This skeptical / antagonistic view is more common than AI podcasts listeners might realize. It's a fundamentally moral cry from the AI Ethics tradition of critically examining the bias, exploitation and alienation in technological automation, and I appreciate hearing it.
Quick thoughts:
- Lots of people are scared about the world in so many ways right now! The disruptive quality of AI, whether in reality or merely as a new source of fear and uncertainty, compounds what many feel is a failing society: failing economy, fragmenting politics, environmental instability. AI talk (as espoused by insiders, boosters and doomers alike) can sound like homilies from millenarian prophets. AI as actually experienced now is mostly a sort of alienating slop that feels increasingly ubiquitous (especially video and images). Bender & Hanna are trying to dispel this and ground us back in the real: other human beings and the society we (not the architects of capital) want.
- There is this scientific / philosophical question that is most interesting to me and hangs over this conversation: what, if anything, did the phenomenal success of scaling show to be true about the nature of language and/or intelligence (especially in the GPT3 transition moment)? Bender is right to point to the skepticism of computational linguists towards the very idea that these systems could bear "meaning". This is a philosophical point but also a scientific one: we don't know what it is the brain does that is distinct from these LLM architectures (as Bob points out), and we certainly don't have consensus about the philosophical limits of language! It seems to me like the counter-intuitive properties of high dimensional spaces enable LLMs to perform feats that I imagine Bender (following Chomsky) would have said were impossible through the technique of next-token transformer models alone. Sufficient data and training demonstrated some real inflection point where new emergent structures appear in the network that allow it to do...? The quest to understand what semantic structures exist in LLMs (or multimodal systems, perhaps more interestingly) is such a juicy scientific project with so many implications. The further and most important question, then, is whether fundamentally "new" semantics can possibly emerge from these systems: new meaning, new forms of knowledge, new art, new science. The combined human-society-technology system, the big meta system, is doing something though!
- If even 2026 LLMs don't meet Bender's threshold for "understanding" or personification, which I think is fair politically if not scientifically, it seems clear that we've yet to see the end of this wave of machine-learning breakthroughs. Systems that include LLMs and increasingly multimodal data are becoming embodied, either in agents or in robotics, a field also going through revolutionary breakthroughs. If these holistic systems begin to pass the test for skeptical linguists like Bender, I think we'll have to admit that the LLMs of the early 2020s represented a meaningful scientific breakthrough. It seems, however, that leading industry scientists are now admitting that scaling alone will not push LLMs where they had hoped in 2022, though of course they are far past simple scaling in their labs, with complex new architectures and techniques. No matter how you shake it, contra Bender, this AI summer has been a hot one.
- The ghoulish specter of social disruption led by a Trumpified Silicon Valley is terrifying to me. The actually existing manifestation of AI as a series of customer service nightmares, deepfake videos, erotic solicitations, and frustrating search results sets a scary tone. I am much more aligned with Bender and Hanna on their political revulsion at this moment and the unexamined, propagandistic way that AI has slipped into our daily discourse. Scientifically and philosophically, I think I am more aligned with Bob! I look forward to his book as a way to think through some of these foundational questions.
Thanks for the conversations as always.
I think an excellent and thoughtful comment.
I see in the substack comments, some people are inclined to mock and be repelled by the discourse style of Hanna and Bender. But I appreciate their contentiousness. It's a breath of fresh air compared to what I usually hear on podcasts where hosts and guests talk for hours about how much they agree with each other, and how wonderful and insightful they each are.
I get why people were put off by Bender and Hanna: their focus on semantics can come off as annoying or pedantic. But I think they're making arguments that are important to interrogate. Here's a related blog post and comment thread at Andrew Gelman's blog:
https://statmodeling.stat.columbia.edu/2026/01/22/a-decision-theorist-walks-into-a-seminar/#comment-2410001
I'm a bit more skeptical than Bob is about the promise/dangers of AI, but this interview is almost flipping my allegiances. Really head-in-the-sand stuff here. There's just no way this technology isn't in some sense a time-saver; I've definitely seen direct examples of that. There's no need for these two to be so caustic. They should take the time to find and interview people who are using it successfully, and add that to their understanding of the technology.
My critique was always that it's important not to buy the hype and just assume they will change everything forever. But it is going to have uses.
What job has it replaced? Is there a single named person who was fired because the program effectively does the job?
I work in elearning. We create online courses for big corporates. Many of my junior colleagues have lost their jobs because AI can write course copy significantly faster, and in some cases, significantly better than they can, at no significant cost to the company. Hence, those colleagues have lost their jobs. So there's a concrete example for you.
I question the ability of AI to write course copy better than professionals, unless the professionals totally sucked at their jobs. There is a lot of incompetence in the workplace, and Covid exposed a bunch of people who don't contribute in a meaningful way. Probably 40% of "knowledge workers" could be sacked with no noticeable change in institutional effectiveness. But this isn't AI replacement of workers.
Lots of people suck at their jobs, and those people will be replaced by AI. How is that not AI replacing workers?
If they suck at their jobs, then they aren't workers... they're employees. They get paychecks, eat up resources, go to meetings, complain to HR; they're completely incompetent, can spend hours telling you why they can't do something that should take 5 minutes, and they spend all their time gossiping and bad-mouthing the company. Firing a bad worker doesn't mean replacing them; it's addition by subtraction. Blame it on AI or sunspots or global warming.
They are still people who need to put food on the table who may become unable to. That is the issue.
How did you get from my saying ‘it saves time’ and ‘has uses’ to quizzing me on job replacement?
Sorry if that was aggressive. I don't know whether it has been proven to enhance productivity from an economic standpoint; in fact, I don't think there's much evidence that it does. There is some St. Louis Fed research, but it's pretty thin. We all have the same questions... I didn't mean to be a jerk about it, because I don't know the answer either. In fact, that's what annoys me about this topic, not people like you who want to learn. Again, sorry.
Don’t worry about it. I’m quite skeptical of a lot of the big talk about AI.
I just think this is new enough tech that anecdotal evidence of use is sometimes not irrelevant. It's saved me time at work today, pulling some stats. There's no need for a serious critic to just double down and claim it will have zero use.
Something that frustrates Bender, and me for that matter, is that the tech is not new. I've been working on it for decades. It isn't SPSS or R... it cannot perform descriptive statistics or regression analysis, but it will ingest the data sets you feed it and produce analytic language that is plausible but complete garbage. You know that, but many do not. It is infecting institutional workflows.
I honestly can’t speak to that — i’m not in tech! I had it doing mind-numbingly simple tasks for the non profit school where I work.
The technology at its heart may not be new but the chat bot interface allows fools like me to do lots of basic stuff we might not have otherwise. I’m sure this is happening all over.
Sorry, I originally hit send too soon. Bad placement of the blue arrow, Substack!
Good lord, could this have been any worse? I feel like Bob was caught off guard by their hostility, and the whole thing went off the rails. I wish I were a good enough writer to express how insufferable I found these two to be.
I had to turn this off after 10 minutes. Came here to see if other people found Bender to be unbearably rude, obnoxious, and self righteous. Nice to know I'm not alone.
It's unfortunate that Emily's combativeness overshadowed her message and tended to derail the exchange in some respects. Still, it is an interesting dialogue.
My God. I haven't finished the conversation yet, but I couldn't agree more. I wasn't sure if I was misinterpreting the very intense-seeming combativeness and offense-taking. Very strange, but Bob rolls with it, and it's definitely an interesting perspective to take in, even if the messengers are more than a little cringe-inducing.