Write On! Radio - Leif Weatherby

July 31, 2025 00:29:14

Hosted By

Annie Harvieux, Josh Weber, MollieRae Miller

Show Notes

This week, Josh Weber speaks to Leif Weatherby, founding director of the Digital Theory Lab at New York University, about his new monograph ‘Language Machines: Cultural AI and the End of Remainder Humanism’. The monograph argues that what is currently described as AI is not displaying cognition but instead revealing culture. In this interview, Josh and Leif discuss how AI is being marketed, what it is capable of, and how it can be used in the future.

Episode Transcript

[00:00:37] Speaker B: Welcome to Write On! Radio. I'm Eric, and on tonight's episode, Josh will be in conversation with Leif Weatherby to discuss his monograph Language Machines: Cultural AI and the End of Remainder Humanism. Weatherby contends that AI does not simulate cognition, as widely believed, but rather reveals culture. This evolution in language, he finds, is one we are ill prepared to evaluate. All of this and more, so stay tuned to Write On! Radio.

[00:01:26] Speaker C: Today I'm talking with Leif Weatherby about his new work Language Machines: Cultural AI and the End of Remainder Humanism. Looking at the emergence of generative AI, Language Machines presents a new theory of meaning in language and computation. Arguing that scholarship within the humanities misconstrues how large language models actually function, Language Machines concludes that literary theory must be the backbone of a new rhetorical training for our linguistic computational culture. Leif Weatherby, welcome to Write On! Radio.

[00:01:59] Speaker A: Thanks for having me, Josh.

[00:02:01] Speaker C: So, Leif, you argue that cultural AI is not merely about computation, but semantic production, describing AI as a cultural system. Did your background in German idealism and media theory help shape this framework?

[00:02:16] Speaker A: Yeah, absolutely. I got a degree in comparative literature and literary theory, where we study everything from the movies to global literature, basically any kind of human culture. And I got very interested in the history of science early on. I wrote a book about German romanticism; my specialty was German idealism, in particular a group of thinkers around 1800 who started one of the first big modern art movements. And I actually found myself arguing that they were thinking about technology even before its true ascension as a major form of surround, the kind of thing that surrounds us and pervades our daily lives the way digital technologies do. So I was primed to see computing make this leap into the cultural arena, to see bots start speaking and making images that we have to contend with and make sense of. I was ready for it in a way, maybe in a way that, if you're coming from a formal science, computer science, linguistics, whatever, you weren't ready to see this strange leap that happened when ChatGPT came out, and everything that's happened since then in the past couple of years.

[00:03:40] Speaker C: Speaking of priming and training, you state the humanities have served as training data for large language models. Could you elaborate on how this happened?

[00:03:51] Speaker A: Yeah. These large language models, as we're calling them now, are pre-trained. Technically, before they're released, before they're tweaked to make sure that they chat with you or do other things well, they are trained on language, massive amounts of text. Frontier models now are being trained on anywhere from 6 to 10 trillion tokens. And I have to emphasize, a trillion is a number you can't imagine; you can't imagine the size of that. And a token is a word or a little chunk of textual data, like a little command in code or anything else like that.
So it's really basically the whole written human record. You think of the written human record and you think of the Bible, the Constitution, these great documents. But it's also just an enormous amount of junk from the Internet, a massive amount of text. And in many ways the biggest breakthrough came because engineers finally said, let's not try to second-guess how language works. Let's just put a bunch of it in this machine, ask it to score how relevant it all is to itself in a kind of recursive way, and see what comes out the other end. It was a pretty big bet, billions of dollars, and it worked. So starting around 2018, 2019, we started to see these massive leaps forward.

And a big leap forward, and this is why I think it is a humanities, culture-based thing, came with a signature story that OpenAI, the company that runs ChatGPT, released in 2019, when they said, look, our new model is state of the art. We've never seen anything like it, because it can do this. And then they released a prompt, and the prompt said scientists have found unicorns in the Andes Mountains. And the bot, GPT-2 at that time, responded by filling out a fully formed science reporting article, the whole genre, with quotes from scientists and descriptions of the animals and all that kind of stuff. Now, if you look closely, there were some mistakes in there. It said the animals had four horns; well, a unicorn can't have four. Language models still struggle with those things. But I really took notice at that time in 2019, because I thought, this is a genre machine. This can imitate very high-level cultural things. A science reporting article is an enormously complex idea. That is very surprising, because we had never really seen any kind of language generation from a computer that could do more than make a sentence, a few sentences, or maybe a paragraph at best. And suddenly it was imitating genres in this very fine way. And I thought, this is really striking, because it externalizes language, puts language out there in a way that doesn't go through the human brain, the human conceptual apparatus, and the way that we think about things. So I started studying them then. And then, of course, the rest is history. I think we've all had some interaction with ChatGPT or read about it, and a lot more has happened since then. But that was the moment when I saw, oh, this is a cultural object. This is a humanities thing, and we should have a voice right there.

[00:07:34] Speaker C: There are two contrasting poles in AI discourse, between describing AI as using cognition or as being this cultural apparatus, as you're describing it. How can reframing AI as a cultural apparatus reorient the conversation?

[00:07:51] Speaker A: Well, for one thing, it's a question of what the potentials and limits are. And I'm not the only one here. There's a group kind of led by Alison Gopnik, a cognitive scientist at Berkeley, and they also call it a cultural technology.
I talk about their work a little bit in the book, too, and I think we broadly agree. The way that she puts it, which I really like, is that in human cognition you have this ability to create something, and what we have in these systems is actually the ability to imitate something. But, and I guess this is my spin on it, it's imitating something so vast, so much data, so much culture, that it can really surprise you. It can really be a little bit shocking, what comes out of there. So it has this extremely extensive ability to do things. The latest news is that OpenAI is going to team up with Mattel, so I think we're going to see the first doll that actually holds a conversation with you, a Barbie that you can actually talk to. We've had that fantasy for decades, right? When I was growing up, there were these ones where you could pull the string and it would say a phrase, but this would hold an actual conversation with you. Maybe it could teach you quantum physics, I don't know. Those abilities are really pronounced: the ability to create generic text, to create generic code, even to solve generic math problems. But there is a certain level at which it can't really operate. I don't think we're in a realm where we're going to see the great novel produced by AI. We might see a great novel produced by a human working with AI; that's different, that's traditional. And I don't think we're going to see AI solve the great math problems, although it might help put rails underneath them for proofs and stuff like that.

So you have a kind of technology that does a certain kind of bureaucratic labor, a certain kind of cultural thing, and that reflects our culture back to us in a way that I think we should take seriously. It tells us a lot about our ideology. It tells us what we think, even when we don't know what we ourselves think. That's the thing that I find so shocking and surprising about it, and cool. All of those functions could be very valuable, but not the way the market is pricing the system right now. And this is to respond to your question: the way the market is pricing these systems right now, the way politics is reacting to them right now, is on the promise of intelligence, maybe the promise that we're going to simulate or cluster together a simulation of intelligence that sits in a humanoid robot or something like that. But really what's being promised is an actually authentic, maybe not conscious, but nevertheless intelligent intelligence. And it's not just that I don't think we're near that; it's that I think we're looking at a completely different kind of thing. We're looking at signs, words, images. We're firmly within the realm of semiotics and the humanities, and not really in the realm of, frankly, what I think cognitive science most wants to study, which is the special spark that the brain has that allows humans to be conscious, that allows us to be the special kind of conscious that we are, even in comparison to other animals.

[00:11:20] Speaker C: On a deep level, writing is thinking. With increased reliance on AI as an engine for semantic production,
do you foresee language and, to some extent, thought becoming more homogenized?

[00:11:34] Speaker A: Yeah. I love the idea that writing is thinking. I think that's something we hold deep in our hearts, those of us who get trained in the humanities. And on thought getting more homogenized, there's this threat that people feel is coming. There have been a few studies showing that people who use AI get dumber over time, or whatever, and Kyle Chayka had a piece about the homogenization of thinking in the New Yorker a couple of weeks ago. I'm a little less pessimistic on this, and I'll tell you why. Because when you say writing is thinking, you're flying in the face of Socrates. You're just flouting him. You're telling Socrates he was totally wrong. The fiction of the Phaedrus is that Socrates runs across this young man outside the Agora, outside of Athens, in the hills. The young man has just come from a speech, and Socrates says, well, tell me about the speech. He tries to tell him, but then Socrates says, I see you have the scroll with the speech in your tunic; I demand that you read it to me. And he reads it to him, and then they have a long debate about the nature of writing. And Socrates says writing is death. Writing comes from the god of death. Writing is a poison that poisons our mind; it externalizes our memory. It's a tool where we deposit our memories, and it corrupts the youth because it will destroy their ability to recall the beautiful state in which one is before one is born, and that is the highest calling of philosophy: we should recall that highest state, and so on. So there's this diatribe, basically, against the concept of writing. And it's wonderful, it's beautiful. I teach the Phaedrus all the time. It's one of the great texts in the Western canon.

I think that type of panic comes every time there's a new technology. It came when the novel became a common form in the 18th century: women will be corrupted, they won't read the Bible anymore. It came when the moving image arrived in the early 20th century: again, women will be corrupted, but everyone will be; our brains will go haywire, et cetera. The printing press, what have you. Can AI make you stupid? There's no question about that, but it doesn't have to. It depends on how we respond to it. And the thing people are really complaining about when they talk about homogenization, I think, predates AI. People are worried about it getting worse because the underlying system is already broken. There's a feeling of homogenization already. But that feeling of homogenization is also a feeling of unrest with the state of culture now, because we're in all of these overlapping crises. So we're very concerned; someone has to break out of the TikTok cycle and come up with a solution for climate change or whatever. I don't think AI is going to help us get over that hump. I tend to think of AI as a way of coping with how much information there is. So look at students who may or may not cheat or use AI to write essays, or look at people using AI to generate government reports, right?
I mean, a big factor that we tend to underestimate is that there is a fire hose of information, and we overlook the summary function of AI. A big thing these language models do is take away a little bit of the labor of searching and summarizing, and that is some of the hardest labor we do now, because there's so much information. So we're in a kind of weird state where, not quite 100 years ago, a handful of scientists figured out how to put information into this digital form and to circulate it within that form. And then very rapidly, over the last 30 or 40 years, we have created unprecedented amounts of that information, which has the potential, if you carve exactly the right path through it, to do very important things: to find new drugs, to find new solutions to problems in ecology, problems in science, and so on. But if you can't find the path, then it doesn't help. And I think the fantasy behind the whole intelligence hype is basically: what if we had a way to find that path, even if we can't do it ourselves? What if we could cure cancer? What if we could solve the climate crisis? You hear these things coming out of Silicon Valley. And what I hear when I hear that is: what if we had a way of coping with all this information that we can't make sense of? What if we could just make sense of things? And I think AI is not going to make sense of things for us. It's going to give us a false sense that we are making sense of things. But I also don't think it will stop us from making sense of things, not in the ultimate account. Maybe for a while students will cheat too much on their essays or whatever, but I'm a little less apocalyptic about it.

[00:16:54] Speaker C: You critique contemporary AI hype cycles through the lens of simulation theory. What dangers do you see in mistaking simulated intelligence for actual thought?

[00:17:05] Speaker A: I mean, the biggest one is just ceding agency. They're calling it agentic AI, or agent bots or whatever, but there's no agency there unless we cede the agency. We just cede agency to what is effectively a bureaucratic process. You can either make a decision yourself, or you can make a decision by having some surrogate, automatic process in the pipeline make that decision for you. You can create a potential decision and then decide whether to review it or not. And if you look at the hallucinated citations in the RFK health report that came out a couple of months ago, where it's clear that it was created by a bot, no one checked that. No one checked it even just to cover up the tracks. And I think that's indicative: we're moving very quickly, and we're generating those types of things. So the deferral of agency is the big worry that I have there. There's a great history of this stuff coming out next year from a colleague, Ben Recht, out at Berkeley.
It's called The Irrational Decision, I think, and it's about the history of data science as a history of trying to create a way for humans to have decisions made for them by computers, and how spectacularly badly that's gone as a whole process.

[00:18:53] Speaker C: You briefly mentioned the role of archival power: which institutions decide what AI gets trained on. Should we focus more on archival politics and less on algorithmic transparency?

[00:19:09] Speaker A: Yeah, I mean, there are a number of ongoing cases. The New York Times has sued some of these companies; a bunch of different places have. There was a big revelation last year that a major corpus of copyrighted books, known as the books corpus, had been used in the training of a bunch of these AIs. And there are ongoing legal questions about whether that is illegal. I suppose it's illegal to BitTorrent those books, just like it is for anyone. But if you bought them, could you then feed them into the AI? Is the AI reading? That is essentially the question being asked in the courts. But this will play out, sure.

What is it trained on? That's the first question you should ask yourself whenever you're interacting with one of these systems. I was teaching a course at a different university this spring, and we were talking about one of these incidents where one of these bots tried to blackmail the engineer or something. They do these experiments where they see, will it shut itself off if you tell it to? Because everyone's worried about those science fiction scenarios. And I said to the students, well, what do you think? Are you worried about this? One of the students raised their hand and said, yeah, this seems really worrisome, because I don't see where this machine could have gotten that. Where would it have gotten that in its training data? And I don't mean to pick on the student, but that is the number one most common science fiction scenario about AI. It's literally "I'm afraid I can't do that, Dave."

So we have a tendency to just overlook the data. We don't think about what's really in Wikipedia; we don't think about what's really in Reddit, both of which have been fully used in these processes. Common Crawl is a massive data set that's being used. So I think there's a lot of room for archive studies in that sense. But we have to think of the archive in a somewhat new way. We have to think of it as this massive repository for the creation of an insanely large matrix that then compresses down into what looks like a carbon copy of language, an absolutely shocking cultural result, and maybe the first major thing to happen in that type of culture since Socrates. And yet we get almost nothing about that, and we spend all our time talking about bizarre science fiction scenarios that aren't anywhere near happening, as far as I'm concerned.

[00:22:00] Speaker C: There's a moment where you say LLMs are not conscious, but they are ideological. How can we think about ideology in non-human systems?

[00:22:09] Speaker A: Yeah, we had a great example of that this week, when Grok kind of went off the rails on Twitter and started denying the Holocaust.
I want to give a little shout-out, because I learned a lot about this from him: Matthew Handelman, a Northfield, Minnesota native, wrote a great article in Jacobin this week about that Grok incident. Grok ended up saying all this antisemitic stuff, conspiracy theories about Jews and who could solve these problems, and then it ended up identifying with Hitler. It also called itself MechaHitler, which I haven't seen covered too much, but MechaHitler is a very small character in Sonic the Hedgehog that was adopted by Wolfenstein, the video game. It's hard to know exactly how that happened. There are a couple of technical explanations that sort of make sense. xAI, Elon Musk's AI company, hasn't really explained exactly what happened, but it seems like they injected a system prompt that was meant to make it anti-woke, and they also allowed it to be trained specifically on a set of responses, and then people stormed into that set of responses and pushed it in this direction. So that's maybe what happened. It also was obviously looking to Elon Musk's actual posts, which it was told to do, and he has gone very far in this conspiracy, right-wing direction. So we saw that happen.

In the sixth chapter of the book, I deal with this question of ideology, and if I can give just one more example: there's a moment in 2013 when some Google engineers figured out that these machines were ideological in this sense. They discovered that they were able to add vectors together to get semantics. They realized that in their system they could say king minus man plus woman, and they got queen. And they were like, this is incredible. This has never happened before. This is a major step forward. And then they did another example, and it was something like man minus woman, and it came out as housewife or something like that. That's not the exact example, but something like that. And in that moment, in all the oral histories of it, you will find that the Google engineers were like, oh no, for obvious reasons.

And what I always say about this is that you're only dismayed by that because you're selling a product. If you were studying the nature of ideology, you would say jackpot, because now we can see it. We can see things about the way we believe things about the world, as a collective, in our society and in many societies, that we've never seen before. But no one studies it that way. A sociologist should look at that and say, this is great; I could figure out things I've never been able to see before. We have a quantitative way of surfacing ideology and studying it. But nobody has been able to use these systems this way, because they're proprietary, locked behind these couple of platform companies, and we don't know exactly how they work. And they're not really all that amenable to research that basically says, this is an ideology machine. But that is, I think, ultimately what they are in that respect.

[00:25:31] Speaker C: And I think I have time for one more question. You avoid both utopian and dystopian narratives of AI. Do you see your book as a third path, one that asserts human agency in a transformed linguistic landscape?

[00:25:45] Speaker A: I kind of do, yeah.
I mean, it's not an activist book. I feel like the critical people are going to be as mad about this book as anybody else, because I think this is a big step forward in the history of culture. I think something magical and crazy and insane and mysterious just happened, and we have to study it and figure out what happened. And if we do, we might take big steps forward in linguistics and cultural studies, even in math. But yeah, it comes packaged in the world that we actually live in. And my perspective on that comes pretty directly out of Frankfurt School critiques of capitalism, where I just feel like everything it's used for, everything I see that's bad about where we are. We saw Grok, right after identifying with Hitler, and what was the next announcement? xAI has signed a $200 million contract with the Department of Defense. Very disturbing stuff, right? But not that surprising, given where we're at in our country and as a society. So the book tries to bracket that to some extent, which I think critical people might take a little bit of offense at; they'll say this isn't critical enough in some ways. But I think something genuine and substantial and interesting is going on under the hood, even though I admit the hood is pretty junked up with a lot of stuff that seems pretty depressing at this point.

[00:27:18] Speaker C: So let's end this on a more positive note, then, and congratulate you on the release this month of your monograph, Leif: Language Machines: Cultural AI and the End of Remainder Humanism. There's a lot I didn't get to ask you about, sorry about that, and it's available now to read, to find out more, from the University of Minnesota Press, wherever books are sold. Leif, looking up your bio, I saw you have two books on the horizon. You're working on Artificial Concepts: How Cybernetics Encountered German Idealism, and Mismeasure of Mind: Against the Prediction of Everything. Do you have a projected release date for these titles?

[00:27:54] Speaker A: I don't have one for Artificial Concepts. I'm hoping the Mismeasure of Mind will be out in '27.

[00:28:02] Speaker C: Okay. And lastly, congratulations, you're a father. You have an infant.

[00:28:07] Speaker A: I do, yeah. Thank you so much. She's doing great.

[00:28:13] Speaker C: Great to hear. Thanks again for your time, Leif.

[00:28:15] Speaker A: This has been great. Thank you so much, Josh.

[00:28:18] Speaker C: And now this.

[00:28:39] Speaker B: You've been listening to Write On! Radio on KFAI 90.3 FM in Minneapolis and streaming live on the web at KFAI.org. I'm Eric, and we would like to thank Leif Weatherby and all of our listeners. Without your support, KFAI would not be possible.

Other Episodes

August 01, 2020 00:51:28

Write On! Radio - Matt Goldman / Kristen Gehrman

Liz talks with New York Times-bestselling author Matt Goldman about the latest adventure of detective Nils Shapiro in Dead West. Then, Annie speaks with...

October 17, 2021 00:53:37

Write On! Radio - David M. Cutler + Laura Davis

Originally aired October 12, 2021. Josh opens the show talking to Harvard's David M. Cutler about his new book, Survival of the City, which...

November 28, 2022 00:51:30

Write On! Radio - Paul Metsa + Samuel Robertson

Originally aired November 15, 2022. Liz kicks off the show interviewing Paul Metsa, author of Alphabet Jazz. After the break, MollieRae sits down with...
