S4 E21 - Erik J. Larson - The Myth of AI and Unravelling The Hype

In this episode of the Profound Podcast, I speak with Erik J. Larson, author of The Myth of Artificial Intelligence, about the speculative nature and real limitations of AI, particularly in relation to achieving Artificial General Intelligence (AGI). Larson delves into the philosophical and scientific misunderstandings surrounding AI, challenging the dominant narrative that AGI is just around the corner. Drawing from his expertise and experience in the field, Larson explains why much of the AI hype lacks empirical foundation. He emphasizes the limits of current AI models, particularly their reliance on inductive reasoning, which, though powerful, is insufficient for achieving human-like intelligence.

Larson discusses how the field of AI has historically blended speculative futurism with genuine technological advancements, often fueled by financial incentives rather than scientific rigor. He highlights how this approach has led to misconceptions about AI’s capabilities, especially in the context of AGI. Drawing connections to philosophical theories of inference, Larson introduces deductive, inductive, and abductive reasoning, explaining how current AI systems fall short in their over-reliance on inductive methods. The conversation touches on the challenges of abduction (the "broken" form of reasoning humans often use) and the difficulty of replicating this in AI systems.

Throughout the discussion, we explore the social and ethical implications of AI, including concerns about data limitations, the dangers of synthetic data, and the looming “data wall” that could hinder future AI progress. We also touch on broader societal impacts, such as how AI’s potential misuse and over-reliance might affect innovation and human intelligence.

Transcript:

John Willis: [00:00:00] Hey, this is John Willis with the Profound Podcast, which, as most of you know, now extends a little bit beyond just Dr. Deming. It has some other stuff, and maybe my holy grail is to connect Deming to all this AI craziness. But I've been talking a lot about a book I just read, The Myth of Artificial Intelligence, and I have the author here. Erik, you want to introduce yourself?

Erik Larson: I'm Erik Larson. I'm the author of The Myth of Artificial Intelligence, and I also write a Substack, Colligo, C-O-L-L-I-G-O. It's a Latin word that means to come together, bring together. So Colligo, C-O-L-L-I-G-O, and a lot of the discussion that I started with the book is now happening online.

It's really where I get engaged readers, so I've been really happy with that, and I hope to see people there. But yeah, I'm happy to be here. Good to meet you, John.

John Willis: Yeah, no, it's great. [00:01:00] I feel honored that you came on, because I've been watching your videos and you're a fascinating person — an intellectual, for sure — and just fascinating in the way you think and break things down, which is pretty awesome.

So I'm going to go right into it. I like the Peircean thing of threes — things happen in threes. And I don't want to hang on the AGI-or-not-AGI question, but I'm trying to get my head around this: there are, like, three camps of incredibly smart people, way smarter than I'll ever be, right?

One camp is, you know, the Ray Kurzweil camp, right? Like, 2029 — it's going to be AGI, right? And there are really smart people who will try to explain to me all the math and all that stuff, and — eh, stop, leave me alone, I don't think he's right. And then there's Melanie Mitchell. I read her book, and I loved her book.

Right. And I saw her in an interview and she said, ah, if you're [00:02:00] going to press me, a hundred years — but she didn't say no. And then there's my friend Dr. Woods, who — most of the people who listen to this podcast know him very well. Critical safety, human factors; he did the post-mortem NSF grant for Three Mile Island, to figure out what the human factors are when people hit the wrong buttons — how do we get to understand the buttons? We'll come back to that a little bit later.

And then there's yourself. I won't oversimplify and call you the non-AGI camp, but you lay out a really strong foundation for what Dr. Woods says: John, these are just machines, you know, these are just computers. And you help people see that — you've got a big megaphone in a smaller group.

So, I mean, what are your thoughts? The thing I get confused about is that all these brilliant people can each lay down really strong arguments.

Erik Larson: Yeah, it's [00:03:00] interesting. I think it's one of the scientific fields where it's almost impossible to draw a circle around what's an acceptable argument in AI and what's not.

So you can get futurism and then linear algebra in the same space — somehow, in AI. It's very difficult to do that with biology or something else, right? Or physics — well, I mean, that's debatable. But since the inception of the field in the 1950s, there has always been a kind of speculative element to this notion of artificial intelligence.

It's never just been databases and computer science. It's always had this sort of "we're building this new mind" idea. And frankly, not to be cynical, but that's been used to good effect by people who want to convince the government, the Department of [00:04:00] Defense — like I did — to give massive amounts of money to do that.

And so it's very interesting: if you take it as a scientific discipline, basically it would be a subfield of computer science, right? And if you take it as a scientific discipline, it's very hard to underwrite a lot of the discussion. It sounds sociological, religious even, right? If you listen to Ray Kurzweil talk about the future of computation, it sounds like he's talking about a religion, and there's a heaven where we upload ourselves.

How is this all happening? There's no way to empirically verify any of this stuff. It's absolutely just people who are excited — for whatever reason, and it can be very valid reasons — about computation, and who want to expand it into an entire worldview.

And so AI is where all that stuff comes together. I've been fascinated by it. I was in the field for 20 years. [00:05:00] And it's just always been fascinating how much human psychology can go into this one ostensibly scientific discipline. Right. So, yeah.

John Willis: And that's what makes your book so fascinating — very much like Dr. Woods. Dr. Woods is probably the smartest person I've ever been able to have a conversation with in my life, right? And I've known a couple of smart cookies. But the thing is, he has so much knowledge, and that's what I kept seeing in your book: it isn't just, you know, a PhD in AI, right?

It's philosophy too. And it was so uncanny as I'm reading your book — I think I was telling you before we started. So again, today just give me threes: okay, Dr. Woods. Can I say that symbolic is deductive reasoning, sub-symbolic or neural networks is inductive, [00:06:00] and abductive is the thing that we're missing? And, you know, a good professor will say, "It's not that simple, John." But then I'm reading your book and you're giving really strong arguments about it, and you're not oversimplifying it. Can you explain, within the bounds of a podcast, the difference between them — because you talk a lot about the difference and how they match up in this narrative?

Erik Larson: Yeah, I think what you said is basically correct. So, we know a lot about inference. We've been studying inference since Aristotle at least, probably before Aristotle, but we got syllogisms: all men are mortal; Socrates is a man; therefore, Socrates is mortal. That was over 2,000 years ago, and we've been studying how we arrive at conclusions ever since.

The basic idea with inference is that we have two components: what you observe, and what you already know, and they have to be combined somehow. [00:07:00] So if you don't know anything at all, you have a kind of blank-slate idea, where you've got to somehow get all your knowledge from what you observe.

And if you can't observe anything at all, you have a kind of brain-in-the-vat idea, where everything has got to be already understood and you can't change. So there's this idea that inference is kind of ubiquitous. It's basically a condition of us being awake — we're just constantly inferring.

And it's: given what I already know and what I see, what's reasonable to think next? That's basically what it is. And so if you think about artificial intelligence, it's got to capture that, right? We have natural intelligence; we're constantly inferring. It's really central to any cognitive system to say, what's reasonable to think next? So how do you get a machine to do that? That's the question of inference. And what I wanted to do in the book was say: look, we know a lot about inference already. Let's start looking at this and see [00:08:00] where computation fits into this puzzle. And it turns out that most everything that we're doing right now is inductive.

Which is to say: from prior examples, we want to be able to have a predictor or a classifier or something — some way of understanding the world based on prior observations. And if you look at a data set in neural networks, it's just prior observations, right?

So it's a huge inductive program, but we know that induction is not adequate for general intelligence. So we already know that we can't get to AGI — that was basically the point of the book. And I think it's a valid argument; I think it's still valid. When we look at large language models, we see that we're having all these problems with hallucinations, they don't understand, and so on, and we don't know how to scale them and we're running out of data.

Well, we're always going to run out of data, and there are always going to be these problems, because we've only got one piece of the puzzle. We're only [00:09:00] doing induction. It's impressive, and it moved the needle — I've been the first to say that — but it's still just induction. So we know that we don't quite have the full picture yet for machine intelligence, or, for that matter, for us understanding intelligence itself.

Right. Yeah.

John Willis: I think that's, you know, sort of the interesting thing. I mean, even before you get to consciousness — I guess the question I want to ask is this. Dr. Woods said to me that he felt that in the 70s, or maybe the 80s, when he was just starting his career — I think in the 70s; I mean, he worked with Herbert Simon and those guys —

he said, "I think we could have programmed abductive reasoning." Do you believe there's a computational model that supports that? First, actually, give us the armchair version of abductive reasoning, and then: is there a computational model that could have done it? Because I totally agree that we're not there, and we've gone down a [00:10:00] path that sort of excluded that.

Erik Larson: Yeah, I mean, the basic idea of abduction is that it's actually a kind of broken deduction. It's an invalid form of deduction, which is to say you can't guarantee the truth of the conclusion given the premises. In deduction, you can guarantee that — well, there's validity and soundness in deduction. Valid means that if the premises are true, the conclusion has to be true.

Soundness means the premises are in fact true. But in abduction, we just have a kind of plausible inference; we don't know whether in fact it's certain or not. So it's a kind of broken deduction. And I always use the example: take a very simple deductive argument called modus ponens — if A, then B;

A; therefore B. So: if it's raining, then the streets are wet; [00:11:00] it's raining; therefore, the streets are wet. That's deduction — modus ponens. Abduction is: if it's raining, then the streets are wet; the streets are wet; therefore, it's probably raining. It might be raining, right? It's a plausible inference that it's raining.

It could have been that a tanker was flying overhead to a forest fire. It could have been that the fire hydrant around the corner was broken in a car accident. There are all kinds of reasons the streets could be wet, so you can't actually say with certainty.

So it's a broken form of deduction: instead of "if A, then B; A; therefore B," it's "if A, then B; B; therefore maybe A." Right. And it turns out that we do that all day long. That's basically the only kind of inference that we use — 99% of the time it's "probably," and it's always up for revision, right?

Constantly. A triangle has three sides — that's going to be true in the morning and it's going to be true at night — but everything else is just [00:12:00] constantly changing and updating, right? So abduction turns out to be a very ubiquitous form of inference for us as humans. And the question is: what do you do about it for machines? There have been attempts over the years to do it, but the problem is that you end up basically pulling in most of the world just in order to say something. I always use this example — it's kind of shopworn, but there are so many ways the streets can be wet that you end up having to specify most of the knowledge in the universe, in the limit, just to say something simple.

So computationally, it's very difficult to do. Yeah.
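
(A minimal sketch of the two inference patterns Larson contrasts here, written in standard logical notation; the rain example is his, the formalization is only illustrative.)

```latex
% Deduction (modus ponens) -- valid: the conclusion is guaranteed.
% If it's raining, the streets are wet; it's raining; so the streets are wet.
\[
\frac{A \rightarrow B \qquad A}{\therefore\; B}
\]

% Abduction ("broken" deduction, affirming the consequent) -- only plausible.
% If it's raining, the streets are wet; the streets are wet; so it is
% probably raining (a tanker or a broken hydrant could also explain it).
\[
\frac{A \rightarrow B \qquad B}{\therefore\; \text{plausibly } A}
\]
```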

John Willis: Yeah, no, I get it. To me, the Columbo character is the classic example of abduction, right? He walks into a murder scene, and something about the blue shirt and that one guy makes him [00:13:00] do some — maybe deduction or induction —

and then he gets to the point that there's mud on the shoes and the murderer would have had to cross the field, right? And I think you did a good job of showing how you think through that — I don't think you say "float," but how you float back and forth between abduction and induction. And I think the other thing that seems to be missing is that sort of intuition, the things we can't explain. This is going to be a rabbit hole, but I think about AlphaGo, right?

I think that was move 37, where AlphaGo makes a move that no human would make, but then Lee Sedol, in game four, makes move 78, which seemed to be this move that they call the God move. And humans have this ability — the da Vincis — and I guess I sort of lean on what you're saying: I think those things, I don't know that we'll ever understand them as humans, right? [00:14:00]

Erik Larson: Yeah. I mean, I'm not sure I can comment about AlphaGo in particular — we can talk about how that program works with reinforcement learning and so on. But yeah, our cognition is complicated by the fact that we are so embedded in our environments.

So, you know, even what's in your stomach at some given point has something to do with it, right? And so with a machine — I agree with your friend, by the way; I'm not familiar with who this person is, but when he said, look, they're just machines, I think that's actually right. If you want to cut through all the discussion: it really is just a technology, something we built. It's a machine.

And so it's going to have limitations that biological systems like us don't have, and vice versa. I can't multiply large numbers in my head, [00:15:00] but my computer certainly can, you know? So there's also an interesting question about how we can best work with a different kind of intelligence, if you will.

I think it is interesting how we can make society better by using AI, right? And so I definitely spend a lot of time working on and talking about that and thinking about that. But if you go to this issue of AGI, what we're doing is trying to force this machine to be like a biological, living cognition like we have. And I think that's just hopeless. I mean, frankly — never say never, I don't have a crystal ball — but I think it's a very, very strange project to try, and I'm not surprised at all that we just have endless difficulties, you know, for 60, 70 years running now.

So, yeah.

John Willis: Yeah. [00:16:00] I like your point about the stomach. I always say that I might be incredibly productive on a Monday because something really nice happened at the coffee shop — somebody said I look like Bruce Willis, or some crazy thing.

And then on Tuesday, somebody gave me the finger on the highway and I'm less productive, right? Computers are never going to have those things — even a bad smell on the way to work could change so much. What about Peirce? This character just seems — I just did a thing with my friend Jabe, and I asked him why Peirce was important to him and why he's important to science.

And we can sort of table that. I love the part in your book where you say that Harvard, like, sealed his papers and writings — how terrible do you have to be to have that happen? But why — I get the sense that you feel Peirce was a pretty important player in, [00:17:00] you know, what should be —

Erik Larson: Well, he was far and away probably the most polymathic genius that we had seen in the United States up to that point.

This is 1860, 1870, and so on. And his dad was a Harvard mathematics professor. And it's actually pronounced "purse" — it's, I don't want to say —

John Willis: Sort of —

Erik Larson: — like Neanderthal and Neandertal. So if you say "Neanderthal," I —

John Willis: I know, I've been told — academically —

Erik Larson: Technically, yeah, his family name is pronounced "purse," technically, but I don't care. But no, I mean, he was interesting in this polymathic sense that he could kind of do anything with his brain, right?

He was extremely talented in chemistry as a child. He was extremely talented in statistics and [00:18:00] mathematics as a child, and in logic. And then he ended up involved in what's called gravimetrics — I forget the department in the government — where they were basically trying to get an exact measurement of gravity, and all kinds of stuff like that, right?

Yeah. So there was this sensitive equipment that he was in charge of. And if you follow his story, you realize his brain could just go anywhere. I just would have loved to have known him — but he was a real pisser, though. He would do things that we would consider irresponsible.

He lost a bunch of government equipment

John Willis: in Europe,

Erik Larson: Right? He just sort of got working on something else, or maybe, you know, a very attractive female — I don't know. But you get a sense that what he was doing was never sinister, but sort of irresponsible.

So he had that kind of genius, but with a not-quite-above-board-on-everything [00:19:00] feel to him. But I think his contribution to AI is actually really significant, because he was the first person — really, in the 19th century, predating modern computation — to say: there are only a few ways that we can do this; let's start looking at them. He was the first person to really see inference as something that's absolutely core and central to everything that we do — how much can we understand, and what types of inference are there? So he put in place that tripartite system of inference. That's from Peirce.

So I think that was actually one of the key moves. He's as important to computation as Aristotle was to logic. But what's interesting is almost nobody knows about him. So that's one of the reasons I wrote the book — just to say, this is a great story, because this guy is like an unknown hero, [00:20:00] you know?

And it's always been that way with his life, by the way. You mentioned the Harvard library — you couldn't read anything that he wrote until the 1950s; it was sealed. So he's always had this kind of hidden quality to him, which is just bizarre given his contributions.

John Willis: Yeah, it's interesting. I was telling you earlier about the book I'm writing on the history of AI. And I was telling a good friend of mine about it, and he's like, you should call it "Rebels," because all these guys — like McCulloch and Pitts — those guys were crazy.

They were like Jack Kerouac characters, you know?

Erik Larson: Wasn't one of them, like, almost living on the street for a while? Oh, yeah.

John Willis: Yeah — Pitts was Good Will Hunting, basically. I mean, he was 13, he was living on the street, and the story goes — I want you to do most of the story, but the story goes that he was literally being chased by a gang and hid out [00:21:00] in the Chicago Public Library, because I guess they were going to kill him or something.

He starts reading Bertrand Russell's, you know, mathematical whatever-it-is, right? And he sends Bertrand Russell a letter saying that he's incorrect. Bertrand Russell doesn't know he's talking to a homeless 13-year-old. And the next thing you know, he's working with Norbert Wiener. The whole history of AI has got these incredible humans.

Like, you know, the one thing I want,

Erik Larson: Yeah, no, I've drawn the same conclusion about the field. Really, really interesting people get attracted and get kind of caught in its orbit.

John Willis: Yeah, totally. So one of the things — and this definitely gets a little geeky — I was going back through my notes, and you talked about the intelligence error. I think everybody who listens to this would know Turing and the Turing machine and all that.

But you sort of tied that to what's sort of wrong [00:22:00] with his view. How do we get from the father of AI to "maybe he didn't get it right in the first place"?

Erik Larson: Yeah, I mean — partly, you know, as a writer, you need to be able to have a hook or something.

So part of saying it's an "intelligence error" is that I need a hook. But it is factual that he was driven — and so was Claude Shannon — and very understandably, right? I don't think anybody else in the 1940s would have done anything different, you know?

But they were very drawn to: we have a puzzle, or we have a problem, or we're playing a game like checkers or chess, right? And it's got a definite, well-defined situation. Think of a chessboard: everything on the chessboard is not going to change day to day.

The pieces will move [00:23:00] around, but the game board is set, right? So I think they took that view of intelligence from the start. And it's just no wonder that when we start wading into the deeper waters, we're going to find problems, because a lot of intelligence is not problem solving, right?

So I called it an intelligence error, which is to say: focusing the bigger problem on one piece of the problem, which is problem solving. Turing adopted that as a working hypothesis. And then he said famously, in that 1950 paper, maybe we can find a way for the computer to learn itself, so that it can sort of grow into this broader cognitive system.

And I think that was his way of saying: I don't see how a chess-playing computer is going to become like, you know, my mom or my friend or me. But it was just hand-waving, right? It was just, "I don't know." So yeah, that's the [00:24:00] intelligence error.

Yeah.

John Willis: I think it was in somebody's book — one of the books — where they said that when Deep Blue, and I know that's more of a symbolic expert system, not like the normal neural-network stuff, that was when they beat, you know, Garry Kasparov. Like, Kasparov could ride a bike after the match.

The computer couldn't, you know. The other thing I think a lot about — not that there's a side, but I sort of have to lean, based on everything I know, even spiritually, about how humans know, toward you and Dr. Woods being closer to the right side. But

Woods is closer on the right side, but. You know, I'm new to this AI thing, right? I tried to ignore it. You know, what happened is I was about three quarters of the way done with my Deming book. A friend of mine showed, like, the earliest version of GPT 3. There was actually some other tool that was abstracting it.

He's like, John, you've got to see this. And I said, okay, let me see. And I asked, who is Dr. Deming? The first paragraph was [00:25:00] brilliant, right? And I was like, I've got to go home — this is going to be amazing, it's just going to help me on research. And then the second paragraph was, like, not so intelligent. The third paragraph was just nonsense.

Right? And I kept tracking it. GPT-3.5 comes out, ChatGPT comes out, and I'm thinking, you know, this is another magic eight ball, right? Like, "When will I get married?" And then when I started seeing the vector databases and the power of the embeddings and all that, I thought, okay, now I'm going to pay attention.

And so for about two years I've been scrapping and kicking and trying to understand all this complicated mathematics and the things that really make it work. And I often get incredibly amazed — and I still believe they're part parlor trick — but, like, the other day... and so there's a question coming, which is: are there points where you have to step back?

The one that sort of threw me for a loop — and I get constantly looped, and I have friends who make [00:26:00] that kind of Ray Kurzweil emergence argument — a friend of mine literally retired from IBM, 40 years in IT, doesn't really care about IT anymore, but he's been into Shakespeare his whole life, and he's into this conspiracy theory about Shakespeare authorship. And I know I'm rambling, but he said, I don't like any of that stuff.

It doesn't work. I'm like, why? He said, well, I think it's biased. I'm like, yeah, it's totally biased. He says, well, ask this question about who is the real author behind, you know, The Tempest or something, and it comes back with, you know, Shakespeare, right? And he's like, see, I told you. I said, well, wait a minute — let's turn it around and say, hey, can you accept that the theory is true,

and then start answering all his questions. And he's like, I didn't think you could do that. And that's not even the amazing part, or what I think is the intriguing part. Then, out of maybe ten rationales making the argument, on about the eighth it said, "Although —"

you know, Occam's [00:27:00] razor, blah, blah, blah — the most probable explanation. And it's so weird — you start thinking you're talking to a person. I'm like, really, you're going to throw Occam's razor at me on this one? And it apologized. And I know it's still, like you said, induction.

It's the most probable, and I know a little bit about how you find similarities and all that stuff. But the rambling short version of this is: are there times where you have to scratch your head and think, maybe there is something going on here with this emergence — you mean with language models? — yeah, just the acceleration of the things that have happened within the last year.

Erik Larson: I mean, I, so I've been I was, okay, so sure.

I'll throw some red meat to my critics: I was surprised that it worked as well as it did. [00:28:00] ChatGPT came out in November 2022 — a couple of years ago, basically — and it really changed things quickly, and deservedly so, because it really moved the needle.

I mean, remember, before GPT we had what, Alexa, right? Conversational AI was in this really primitive state where, if you didn't ask it something very specific and predictable, it wasn't able to really interact with you. And so all of a sudden, overnight, we're talking to our computer.

We're talking to our computer. So I think like I, I, I came out right away. And, said that this is a huge advance in natural language processing, which is my field. And it's moved the needle. The problem is, is that the progress also points out the broader problem. So it's [00:29:00] some ironic sense, like the fact that we made progress also sharpened our ability to see what's wrong with machine intelligence.

And so I actually use GPT-4, right? The latest one — I give OpenAI 20 a month to use it — and I use it in constrained circumstances where I have a high probability of succeeding, or of augmenting what I'm trying to do. But it's very obvious, interacting with that technology over the last few months, that it literally has zero understanding of what's going on.

So it's an emergent intelligence that's not grounded in anything. There's a sense — and this is very hard to defend philosophically — in which I know there's something going on inside you. I know, you know what I mean, but it's very hard to argue this philosophically, right?

There's this problem of skepticism, but in the case of [00:30:00] GPT, I'm very certain that there's nothing going on inside of it. It's just a very powerful emergent system. And so the question is: can we scale that to get the rest of it? And I think already the answer is looking more like no rather than yes.

I think, frankly, the field was more bullish on getting to AGI a year ago. And now we're slowly getting into this almost winterish feel where — if you'll notice — there are no big updates coming anywhere anytime soon. And the rhetoric a year ago was: next week, we're going to change the world.

I think OpenAI said their next version was going to reason like a PhD, be able to construct very complex plans with multiple steps, keep track of stuff, right? And we haven't heard anything from [00:31:00] that. So I think it's almost paradoxical.

I think we need to acknowledge when AI makes progress, because if you don't, people like me look like we're just sour-grapes critics, right? So I'm fully happy about the progress that the field can make, but I also don't see any fundamental reason for me to change my position.

And I don't see how GPT actually represents something that should make me go back and say induction really is enough. I think it's actually being proven that induction is still not enough, right? So, yeah.

John Willis: yeah, I think there's so many complaining. You do like an amazing job in your book of like, pointing out so many great examples.

So there are two ways I'd like to go here, and I'll let you choose which is better. [00:32:00] One is, there's this sort of backlash now about how we're going to run out of data — and I think you talk a little about how much data we have.

And the extension of that is: is synthetic data going to pollute this whole thing, so that the winter comes or whatever? And then the other point — I went back to your book and tried to pick out some points about this — are there potential ethical problems, not in a Terminator sense, but more in, I think, research and learning? So those are my two, and I'll let you pick the order.

Erik Larson: How do you mean — in terms of the ethics, do you mean?

John Willis: I think there were points you made about, you know, is this sort of thing going to work against our intelligence — the impact on innovation, or something like that — those sorts of things.

Erik Larson: Oh, sure. Yeah, I do think [00:33:00] there's a continuing worry there.

One of the points I tried to make in the book, and that I'm now making on my Substack, is that a lot of the discussion in AI — and this is historical, it's not just today; it's been in the field really since the inception — is that it's a replacement for our intelligence.

And so what ends up happening is you get these really influential thought leaders — like Sam Altman at OpenAI, and Larry Page at Google has made comments over the years — pushing this idea that the future of humans is basically machines. And then you see in cognitive science, over the last 20, 30 years, all the research papers are about how we have cognitive biases and we're stupid, right?

It's just a compendium of — you know, [00:34:00] Daniel Kahneman, Thinking, Fast and Slow, and the follow-up book was Noise: A Flaw in Human Judgment. These are the guys that win the Nobel Prizes, and basically all they can say about how we think is that it's pretty bad and computers can do it better, right?

Right. And so it becomes a cultural question at that point, not even a scientific one: how do we make a future for our kids, for us, for everyone, if we don't even believe that we have the resources to do it? And I also find it almost — I wouldn't say offensive, that's too strong — but we're talking about products that companies own, right?

AI does not exist in some ethereal realm; it's literally owned by billion-dollar companies. And so we're basically saying the future of the human race is these current stakeholders. I find that to be a little too reductionistic and simplistic. And so I'm always trying to break [00:35:00] into that — I'm trying to bust that way of thinking about things,

so we get people thinking about what kind of future we actually want, you know?

John Willis: yeah, you just remind me. I just read one of your articles on some stack that sort of the fat tail thing, which goes back to my, you know, that maybe the real innovation is in the fat tail, right? Yeah. Yeah,

Erik Larson: I mean, one of the straightaway consequences of adopting induction as your go-to inference mechanism is that you have a very difficult time dealing with exceptions and outliers.

Almost by definition, you're going to be a better reasoner inside the bell curve. So in a cultural sense, it skews to the status quo, right? What's already known is always going to be vastly more powerful if you're an inductive inferrer. And you're always going to have a problem out in these fat tails — a fat tail is just a distribution that goes away from the norm [00:36:00] and contains a large number of outliers.

So it's actually really important to understand what's going on out in that tail — not in the curve, but in the tail — but inductive systems have very little to say about that, because these are very low-frequency events. When a successful inference involves high frequency — observing something many, many times, which is how you get your confidence — then anything that's observed once but has a huge impact is going to be an outlier.

That's how you get your confidence that anything that's observed once, but has a huge impact is going to be an outlier. Right. So the problem is, is a lot of the world is like this, right? So, so yeah, I'm worried. I'm also, I'm worried about outliers because we need to be sensitive to them. And, and. The way that our marriage and love affair with, with machines right now, or make it's making that difficult for us to, to focus on.

John Willis: That just goes back to the other question I was asking: if the [00:37:00] bell curve is the majority of this inductive sort of thing, do we have this problem of — I literally just saw a video of Eric Schmidt. It was a bootleg video.

It's pretty fascinating — him talking off the cuff — and then they deleted it, but it's still out there. He goes into how, according to him, he's scared to death (while looking for investment opportunities) about the running out of data, right? Like, at some point —

that sort of plays into this. Well, I guess I'm saying maybe there's a winter coming and the glass is half full. This is sort of Dr. Woods's argument — he called them florescences instead of winters and summers, like these blooms that glimmer and go, but there's still some impact.

So maybe the half-full version of this is that the technology helps us become more learned, more [00:38:00] understanding, but we run out of data. And I guess part B is: is there a danger in the synthetic data?

Erik Larson: yeah, I, so there is a big danger that so they, they call it the data wall.

That's the industry term. The data wall basically means we can't make progress unless we get more good, usable data, and we don't have it. So we hit a wall. And then the other major problem is something called catastrophic data collapse. There we go — there are so many of these terms running around now, right?

But data collapse is this idea that if we start using synthetic data — synthetic data is just data that one machine generates for the consumption of another one. So there's no human in the loop anymore. There's no [00:39:00] English-speaking person, 60-whatever years old or 15 years old, doing this; it's now just machines feeding each other, right?

The jury is still out on how much synthetic data we can use and still make progress in terms of scaling — this is a big debate, so I don't want to pretend it's simple — but roughly speaking, the more synthetic data we're forced to use, the harder it is to make progress like we did initially, right?

So we're getting this asymptotic thing where we just can't make any more progress. And the other problem is even worse than not being able to scale, which is to say that the actual models themselves are starting to get big failure points in them, because the quality of the data is not up to speed.

So you see some very hard-stop problems — the [00:40:00] systems hallucinating, drawing really bizarre conclusions — because they don't have good data. So the sensitivity of this model of AI to data quality — the absolute, total sensitivity to quality data — is very important to look at.

And frankly, we're almost out. I think one piece in The Economist projected that by the end of 2026 it's going to be a hard stop — we're just out. The only thing we're going to have to keep scaling with at that point is synthetic data, because all the other ways of gathering data across the worldwide web are already basically exploited. Humans are not making data fast enough to feed the machines — that sounds sci-fi, but it's an accurate way of putting it.

We're just not making enough data to continue to scale these machines. So it's a big problem with the approach. You can't say that you've cracked the secret, the mystery of [00:41:00] AGI, if we're going to just hit a wall in 2026. And I actually think that is happening.

I think we'll see that happen. Yeah.
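
(A toy, purely illustrative sketch — not Larson's experiment — of the recursive-training dynamic he describes: if each generation of a model is fit only on data sampled from the previous generation, with no fresh human data, the estimated distribution tends to narrow until its diversity collapses. The Gaussian setup and sample sizes are arbitrary assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Start from the "real" data distribution (a standard Gaussian here).
mu, sigma = 0.0, 1.0
n_samples = 50  # synthetic samples available per generation

for generation in range(1, 1001):
    # Each generation trains only on data generated by the previous model.
    samples = rng.normal(mu, sigma, n_samples)
    mu, sigma = samples.mean(), samples.std()
    if generation % 200 == 0:
        print(f"generation {generation:4d}: mean={mu:+.4f}  std={sigma:.6f}")

# The spread (std) drifts toward zero across generations: the "data"
# loses diversity even though every individual fit looks locally fine.
```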

John Willis: Yeah. Well, again, I'm trying to use a tool right now from Microsoft called prompt compression, and I'm pretty sure it's broken, seriously. I've been trying to manage hallucinations through vector databases, embeddings, training, re-ranking, all that stuff.

So I don't worry about hallucinations as much, because I think there are ways to mitigate them to the point where things are useful, right? Like I say to people: if you're going to build a chatbot that tells corporate-headquarters guests where to find the best lunch, go for it.

But if you're giving advice to people who climb up on telephone poles — which is one of the clients I work with — they could die if they don't get the right answer. So, like, we'd better be really careful here.

Erik Larson: That's right. Yeah, there's a famous case of an [00:42:00] LLM that was used by Air Canada, the airline, and the LLM, in trying to please an irate customer, invented out of whole cloth a refund policy that just didn't exist.

And Air Canada actually refused to pay it out.

John Willis: And then it went,

Erik Larson: — you know, about the case. Yeah. So stuff like that happens. But I also want to point out one really interesting development that we do have — I do think there are positive things to say about AI, with the attention mechanism, the transformer architecture, right?

The interesting thing is that very personal, creative content is now really what it's best at doing. So what's funny is that your computer is now better at writing a thoughtful [00:43:00] email than we are. And I think that was just unexpected, and it's very interesting.

I'm not sure, in big-picture terms, how to make sense of that yet. Like, why is the strength of it actual creative, emotional content? And where we get into trouble with LLMs is when we're forcing them to be logical, like machines — which people assume they're really good at. So I think that's very interesting.

I haven't plumbed the depths of that situation myself yet, philosophically, but I do think it's interesting to point out. And I don't think there's any benefit in saying, oh, this stuff is never going to work, it's all useless. No — it's very useful.

It's very powerful. It's changing the world — let's be honest, this is all happening. But we still have these bigger questions, and they're still lurking there. Right.

John Willis: Yeah. And again, I think the thing I fight with a lot of my [00:44:00] enterprise clients is this binary thinking — it's all or nothing.

They did the same thing with cloud. I can go back through my whole history of technology: things you either could do or couldn't do, and if it was "could do," it was going to do it all, right? I was at a conference a couple of weeks ago where one of my friends, who I've co-written some papers with, at Cisco, said, you know, people don't understand — a one percent gain in productivity at Cisco is huge.

And there are companies saying they're getting developer productivity. Instead of just saying, hit the LLM button and give me the new basic product — what if you're getting 70 percent productivity gains for developers on a day-to-day basis, right? At Adidas, right — the guy who runs digital transformation.

If there is an AI winter coming — and there probably is — then, like every other AI winter, we've still moved the needle a little further, right? So —

Erik Larson: Yeah, absolutely. I mean, I think it's interesting you brought up [00:45:00] labor productivity, workplace productivity. I don't know the latest on that, but the last time I checked there was still confusion as to whether it was actually increasing productivity — though, to be fair, that could just be because there's a lag in productivity numbers and we haven't seen it yet.

Right? So it may be, but if you have more up-to-date —

John Willis: I'm listening to the people who are doing digital transformation. So again, I don't have, you know, statistical data. I know there's the Wall Street version, which is very skewed right now, right? Like, if NVIDIA is not making billions of dollars... The world doesn't even understand the difference between inference and training, right?

Ninety percent of my customers are not running training models. They're not building models; they're basically going as far as they can on inference. So right off the bat, Wall Street conflates inference and training because they don't know the difference, right? And then you've got, you know, JP Morgan and [00:46:00] Goldman Sachs, right,

making these statements — don't even get me started. Goldman Sachs, pre-GPT, pre-transformer models, used to have, like, 4,000 day traders. Now they have maybe 30, right? And for them to say this new technology doesn't work because some trader tried to kill some application —

So there's that narrative. Then there are the naysayers — the hallucinations, we're not getting the magic-eight-ball productivity. I'm listening to people like Adobe, Adidas, Cisco, John Deere, and they're telling me that, if you look at it from a pragmatic standpoint, they're getting incredible

developer experience and productivity gains.

Erik Larson: I don't doubt that at all. I would expect it to increase if you have the equivalent of a Mensa-level, 140 or 150 IQ intern right next to you that you can ask [00:47:00] for anything. It doesn't surprise me at all that we might see

productivity gains. But the only caveat I have — and this is from my own experience using the system, ChatGPT — is this idea that I have to do extra work to make sure it didn't say something crazy. And that's very real; that's work that I have to do, right?

Some of the fact-checking that you have to do is very time-consuming. So there's an issue there. Yeah.

John Willis: I go with my Stack Overflow argument, right? Like, most developers — you know, Capital One has 20,000 Java developers; I've heard this from large banks, right?

And Capital One pointed 20, 000 Java developers, right? And, and I'm like, how many of those people are like copying and pasting code [00:48:00] from stack overflow? Like, so when you get code from somebody, like, you know, like a great coder, you know, figures out like, even myself, I'm not a great coder. I'm a great citizen coder.

Whatever I get back, I'll go for the shortest distance. I'm not going to write a web service from scratch; I'm going to go find what exists. And so I think, in a sense, it's still the same problem, and maybe we're conflating it with what we've been doing all along:

Googling somebody's tutorial or Stack Overflow and just using it. And it goes back to the binary thinking. If you magically believe this thing is intelligent and it's going to give you perfect code, and then you start to implement it and have to do a whole bunch of backtracking to fix it —

Yeah. Then like,

Erik Larson: Yeah, no, I think that's a hundred percent accurate. I would put your comment this [00:49:00] way: the way we are wasting time now is very similar to the way we were wasting time yesterday.

John Willis: You know, it's

Erik Larson: — like, you can't get out of that.

Right? Once you have a lot of information — once your information space is full of opportunity everywhere, so you can just grab code from... I forget what you were talking about. What was the website?

John Willis: Well,

Erik Larson: Once you have those opportunities, for people all over the world it's just: I'm not going to vet this and do all the due diligence — if this stuff compiles or runs and gives me the answer, I'm just grabbing it.

I don't care who wrote it. And so once you're in that mode where you have this huge information space and this big menu, then the LLM just becomes another piece of that. It's not a fundamental distinction. And I do think that's right. People who are saying it's going to ruin [00:50:00] coding, it's going to ruin —

it's like, no, it's just another tool. It's another arrow in the quiver.

John Willis: It's a tool. It's an extension of how we work. You know, a quick one: I was trying to put together a tutorial for a client, trying to explain Euclidean versus cosine similarity, right? And I was running this sort of code in an open-source tool that literally seemed to give me the same number for both.

Erik Larson: Yeah.

John Willis: So I used this tool that literally tries to fix bugs using, like, Microsoft Copilot, right? And it was a large open-source repository, so I wasn't going to spend a couple of days navigating it to figure out where I'd find the library. I can read code, but I'm not savvy enough to spend a week trying to figure out which actual library it's using.

But we opened up an issue. It didn't give me the right code to correct, but it literally told me the library. I went into the library and realized the code was correct — didn't even have to run it, just looking at the logic. And then I asked [00:51:00] ChatGPT, what would be a scenario where I'd get a similar or the same number for the two?

And it said: if it was small enough and they were both normalized, you'd get — like, bang. To me, imagine how hard it would have been to get that answer otherwise. "Normalized" — that's a lot of, like, AI coding intelligence.
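
(A minimal, illustrative sketch — not from the episode — of the relationship ChatGPT was pointing at: once vectors are normalized to unit length, squared Euclidean distance is just 2 × (1 − cosine similarity), so the two measures move together and rank neighbors identically. The vector values below are made up.)

```python
import numpy as np

# Two arbitrary embedding-like vectors (illustrative values).
a = np.array([0.3, 0.8, 0.5])
b = np.array([0.2, 0.9, 0.4])

# Normalize to unit length, as many embedding pipelines do.
a_unit = a / np.linalg.norm(a)
b_unit = b / np.linalg.norm(b)

cosine_sim = float(np.dot(a_unit, b_unit))
euclidean = float(np.linalg.norm(a_unit - b_unit))

# For unit vectors: ||a - b||^2 = 2 * (1 - cos(a, b)).
print(f"cosine similarity:    {cosine_sim:.6f}")
print(f"euclidean distance:   {euclidean:.6f}")
print(f"sqrt(2*(1 - cosine)): {np.sqrt(2 * (1 - cosine_sim)):.6f}")
# The last two numbers match, so ranking neighbors by either metric
# gives the same ordering once the vectors are normalized.
```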

Erik Larson: So, I think that's right. I think there's more of a danger, frankly, if you put on the creative-writing hat.

Like, my example was an email, right? If you're looking for the perfect diplomatic way to say something to your boss, I would run it by ChatGPT. It's very likely to be better than what Joe or Mary comes up with on their own — maybe, maybe not, but I would try it.

I would look at it. But the problem is, if you're writing a book — this is where I really see the problem. If you're a real, quote-unquote, real writer and you actually want [00:52:00] content that really hits hard, really punchy, really interesting, you get that bell-curve problem with an LLM. You just can't.

I've done experiments where I said, let's write a book called Silicon Dreams, and we're going to tell the story of the development of the worldwide web in the 21st century so far. And you start going, and then very soon, looking at the pages you're getting, it's very quickly apparent that this is just really averaged,

John Willis: you know,

Erik Larson: and it's always saying the same things — like, "we delve into this problem" and "navigate the complexities." It's got this real saccharine, skim-the-edges way of writing prose.

Like we, we delve into this problem and navigate the complexities. And so it's got this real kind of saccharine skim, the edge way of writing prose, and so I think. For me as a professional writer, that's not a problem because I'm not interested in somebody else speaking for me, you know, but I think for like maybe an education, that's going to [00:53:00] be a, an issue where we just get, we get a whole generation of, of, of students who basically just turn in something that the professor has to take because you can't prove, you know, who wrote it or not wrote it, but nobody's actually.

advancing this part of human culture that's really central to our identity, which is writing. Yeah, there's a real problem with that, actually.

John Willis: Yeah. You'd hope, as writers — and I do believe this; it goes back to that fat-tail thing — like, when I'm writing a story, I do a lot of research, and now I do way more research online

than I ever had the capability to before. So it's accelerated things. I mean, literally, this book probably would have taken me three years; it's only going to take me about a year, right? But at the end of the day — like I was telling you about the Walter Pitts story — AI is not going to write a Good Will Hunting version of it,

or tell me that [00:54:00] they used to take road trips to Mexico, and that they probably — I don't think I'll ever find this anywhere — did it like Kerouac and those guys would, on acid, you know? So our ability to tell stories — and that reminds me, you said something earlier I was going to comment on, about one of the clear missing pieces. The people who do investigations of plane crashes and babies dying in hospitals, right?

Yeah. Dr. Woods and the people doing that work will tell you it's all about listening to the stories. Yeah, the mental models, right? Like, five people have five different narratives and you sort of look for the, yeah. So I think it is the story, and hopefully writers are not obsolete just because any Tom, Dick, and Harry can basically write a book on any subject, right?

I don't think that's gonna be the case

Erik Larson: Yeah. Yeah, absolutely. I like, I think it will, that's one of those, these cases where you get this unanticipated consequence of you have the bump in the power of [00:55:00] the technology, but it ends up proving that the human way of doing things is actually distinct and unique and so on.

You know, I think in the case of writers, I don't see that being a replacement model. Nobody's going to keep reading large language model text and calling it literature. We're just not going to do that as a society, and writers are not going to accept that, you know, for their profession.

But like I said, when people don't care as much, when it's emails and so on, or with code snippets, it may be productivity enhancing and fantastic, right? Why not? We don't want China to be more productive than the American, you know, labor force.

Right. So fantastic, bully for us. But it's kind of a question of, it's almost case by case: where is it really advancing us, and where is it kind of dangerous and we'd better

John Willis: be careful. Yeah. So, I know we're a little bit close to the hour, but I [00:56:00] was going to ask you to put your science fiction hat on.

There's no doubt you're probably well versed in science fiction. And, like, is there a what's next? I've got two questions so we can close it up on this, but one is: what are you finding incredibly interesting now, beyond just perpetuating this really good idea of how we need to think about these things?

But is there a sort of, like, do we get a winter? What could happen, like in the Foundation trilogy? Or, like, and now I'm going crazy, but what the heck?

Erik Larson: I mean, I'm the wrong person to ask for really good science fiction. But, like, I think that most of the energy in the debate right now is around alignment issues.

I feel like that's the kind of hot-button issue [00:57:00] when we talk about what the future is going to be like. And I think, you know, this idea that we build autonomy into systems, and then that very autonomy, you know, basically, just to put it bluntly, ends up screwing us, right?

Like, we have systems that decide they're going to crash Wall Street because, for reasons that are unclear to us, that was the perfect move on the chessboard. But my position on this is very boring. I don't think that's going to happen. So, I'm an engineer. I've built systems for over 20 years.

And we always build systems to have boundary constraints like that. Any engineer, nobody's going to build a system that has a magical connection to Wall Street. There's the engineering process. That's actually never going to happen. The only way you can get results like that is if you become sci-fi speculative and say, this [00:58:00] machine now has motivations, right?

Like, it literally has its own ideas. And I don't see how that can happen, right? I just don't see how that can happen.
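Erik's point about boundary constraints maps onto a familiar engineering pattern: the autonomous part only ever acts inside limits that humans set, with a kill switch that operators control independently. A minimal sketch of that pattern, with entirely hypothetical names and thresholds rather than any real trading or safety system:

```python
# Minimal sketch of a hard boundary constraint plus a kill switch around an
# "autonomous" component. All names and limits are hypothetical; the point is
# only that the engineered envelope is checked before any action is taken.

MAX_ORDER_VALUE = 1_000_000   # hard ceiling per order, chosen by humans
kill_switch_engaged = False   # operators can flip this independently of the AI


class BoundaryViolation(Exception):
    """Raised when a requested action falls outside the engineered envelope."""


def submit_order(order_value: float) -> str:
    if kill_switch_engaged:
        raise BoundaryViolation("kill switch engaged: no orders accepted")
    if not (0 < order_value <= MAX_ORDER_VALUE):
        raise BoundaryViolation(f"order of {order_value} is outside the allowed range")
    return "order accepted"


print(submit_order(50_000))    # inside the envelope: accepted
try:
    submit_order(5_000_000)    # an optimizer cannot push past the ceiling
except BoundaryViolation as exc:
    print("blocked:", exc)
```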

John Willis: So now, again, there's two things I think about. There's this sort of, what is the envelope, or the paperclip thing, right? And then, when you talk to somebody, you know, periodically I run into DevOps or infrastructure and operations people that are running, you know, the backbone of Microsoft, or the OpenAI people, or I know people who have interviewed there, and some of the people that sort of built it and maintain it will say

they don't understand how it works. And even when you start listening to, like, Hinton and LeCun, some of those guys are even saying, we don't really know. Yeah, the people that built it don't know. So again, I don't know, but I optimistically hope that you're right. But, you know, you make me think about the sort of paperclip thing combined with [00:59:00], like, we don't know how this thing's actually working.

Yeah. I

Erik Larson: mean, so the point I made in the book about the paperclip example, for your listeners, it's the idea that you ask an AI system to maximize factory output of paperclips, right? Like, we just need a lot more paperclips, so figure out a way to optimize that function. And it ends up using the molecules in the CEO and the workers and everything to make them, and you end up with a universe full of paperclips, right?

And so that was posed by Bostrom actually back in the early 2000s. I think that example is a very longstanding example of unintended consequences. And this actually has a mythological component with, you know, the German writer and philosopher Goethe, who told the story of the magician and the sorcerer's apprentice. I think it's called The Sorcerer's Apprentice, with the broom, [01:00:00] and the point of the story, going all the way back in time, is to beware of unintended consequences, right?

And with that story, the apprentice gave the broom magical powers, and I can't remember exactly how it worked, but it ended up destroying the entire kingdom, you know, from a small request, because the person didn't understand the power.

John Willis: And so

Erik Larson: the person doesn't understand the power. That's a mythological component that we use to keep ourselves in check. And so I think that the alignment discussion draws a lot of energy and inspiration from a very understandable and very necessary story that we tell ourselves about unintended consequences.

In the case of machines, I think it's slightly misconstrued somehow. It's actually difficult, [01:01:00] John, for me to get to the root of it, right? Like, to really nail this issue and say what's wrong with it. It's difficult. It's possible that we could design autonomous systems that had absolutely catastrophic consequences,

John Willis: but

Erik Larson: the way that it's discussed is slightly speculative, and

John Willis: logical, really. I mean, at the end of the day, right? Like, you know, I've worked at large banks. They've got two-factor kill switches to turn all the border routers off when trading goes nuts, right? Like, yeah. Electricity counts in this equation, right? So, yeah. I mean, the

Erik Larson: power, we're not going to cede the power grid, you know. Right.

So, like, I think it's interesting. I'm a fan of talking about alignment, but I want to get it out of the sci-fi space. I want to, like, let's talk about it in a real sense that affects us, right? Yeah,

John Willis: I agree with that. Yeah, I mean, so the book, you know, I highly recommend it. I've been [01:02:00] recommending the hell out of the book, because I think it's great, especially for people that come from my sort of world, who are very grounded, not to say other disciplines aren't grounded, but the people in this world run infrastructure for the largest banks in the world. They run some of the largest hospitals, insurance, retail companies, you know, and their responsibilities are incredibly high, right?

So they're not the ones that jump out and run into, let's just make this widget solve everything, right? So I think your book is going to be really helpful for them when they're talking to the CIOs and CEOs about the groundedness of, you know, getting past the stupid mythology, but not, you know.

Erik Larson: That's right. That's right. Yeah. Well, I hope it has that effect, and so far, so good. I've gotten a lot of really fantastic feedback.

John Willis: It's a great book. I mean, it's definitely, you know, like, I have a good friend, Mark Burgess, who was, you know, a quantum [01:03:00] physics guy who literally became a computer scientist and ran famous open source projects.

And I had to read his book a couple of times, but, you know, hey, that's good. It's good, good, healthy... Yeah, well, it's been a pleasure. I hope you had as much fun as I did. So, yeah, it

Erik Larson: was great. Thank you for having me on. I actually have a hard stop, but I think we're just almost perfect right now.

All right. Yeah. Wonderful. And let me know how your book turns out.

John Willis: Yeah. Yeah. You know, I may send you an early copy, since you probably have a lot of background in the history and all. I want to be able to have my mother-in-law enjoy it, but I also want people who have deep history to be able to say, yeah, John, that's a little bit wrong there, you know?

So I can get that right. Yeah. I'd be happy to. All right. Sounds good. It was a pleasure. Thank you so much for coming on.
