S4 E22 - Dr. Jabe Bloom - Navigating the Myths and Realities of AI with Pragmatism

In this episode of The Profound Podcast, I sit down with Dr. Jabe Bloom, a researcher and expert in systems thinking, AI, and digital transformation. We explore Erik Larson's book The Myth of Artificial Intelligence, tackling the contentious debate around artificial general intelligence (AGI). Dr. Bloom offers insights from his dissertation and divides the ongoing discourse on AI into two camps: dogmatists and pragmatists. Dogmatists believe AGI is inevitable, while pragmatists focus on the practical impacts of current AI technology, such as large language models (LLMs), and how these will reshape businesses, education, and society.

Throughout the episode, Dr. Bloom explains his framework for thinking about AI, touching on proactionary versus precautionary approaches to its development and regulation. He also draws connections between these ideas and W. Edwards Deming’s principles, especially around abductive reasoning—a concept that links back to Dr. Bloom’s past discussions about AI’s potential in problem-solving.

The conversation takes a critical view of AGI's feasibility, with Dr. Bloom emphasizing the current challenges AI faces in replicating abductive reasoning, which involves making intelligent guesses—a capability he argues machines have yet to achieve. We also dive into examples from fields like DevOps, healthcare, and city planning, discussing where AI has shown great promise and where it still falls short.

Key takeaways from the episode include the importance of addressing present AI technologies and their immediate impacts on work and society, as well as the ongoing need for human oversight and critique when using AI systems.

Transcript:

John Willis: [00:00:00] Hey, this is John Willis again, The Profound Podcast, and, or maybe it's called the Dr. Jabe Bloom Show. It's just, I think most people are pretty, like, we're pretty stoked about the last couple we've done, and, you know, Jabe is just an amazing guest to get on your podcast. So hey, Jabe, how's it going, my friend?

Dr. Jabe Bloom: It's going pretty well, man

John Willis: Yeah

Dr. Jabe Bloom: Interesting conversations with people and that's been fun.

John Willis: Yeah, this AI stuff, man. What's going on here? We'll have to figure this stuff out, huh?

Dr. Jabe Bloom: Yeah, definitely.

John Willis: So, last time we talked about, like, doing a book review ish, whatever. You know, I, I I listened to the myth of Eric Lawson's myth of AI.

I had Erik Larson on recently, you know, we did a podcast on, you know, your thoughts on the importance of abductive reasoning, which I think got a really good response. So just a heads up, if you [00:01:00] listen to this one, you probably want to go back, it's like two or three podcasts ago with Jabe, where we really dissect Jabe's research and why it was so important to him to understand abductive reasoning.

Just a heads up, Jabe was really helpful to me, you know, when I was writing my Deming book, in helping me sort of understand how it might apply to Deming. And he, he knocked it out of the park by, you know, doing the holy grail of tying Deming to abductive reasoning.

So anyway, the focus here then is really Erik Larson's book, The Myth of Artificial Intelligence. So, Jabe, I'll start off with the, you know, the thing I, you know, my meat and potatoes version of this thing, that the myth of AI is sort of an anti-AGI argument. You know, it's been interesting that I wasn't really interested in this.

And then I think I've told you, and I, I told Erik Larson in the podcast, you know, John Allspaw, you know, when I told him I [00:02:00] was writing a book about the myth of AI, or about, I'm sorry, I was writing a book about the history of AI, he said Dr. Woods wanted to talk to me, and I got briefed about his stuff.

I mean, it really overlapped really well with, you know, with Erik Larson's book that I was actually listening to at the time. And so I, I've kind of come to this, he had a real strong opinion about, sort of, you know, artificial general intelligence. Right. And then, you know, Erik Larson has a whole, very sort of lengthy, very complicated book

on his argument. And it was, it was pretty, you know, it was an intense book, right? Like, he covered a lot of stuff there. Anyways, so to get to the point as I ramble, I see, you know, the way I look at it, there are sort of three groups when it comes to this AGI. You know, as you learned me well about, you know, what's his face, Peirce's thing of threes, so I love that.

I think there's [00:03:00] three categories. There's the Ray Kurzweils of the world, right? And, you know, so they think that, it's like they have a date, they have a year and a date when AGI is going to happen, and he does math, and I've had friends, really smart people, say, hey, John, let me show you, and I don't want to see the math. And then there's, you know, Melanie Mitchell, she's, she's too busy to come on my podcast, but she wrote a great book on the history of AI, probably my favorite so far of the ones I've read, and I heard her on a podcast with Lex Fridman, and she said probably 100 years, like, it's going to happen, but she's hedging her bet. And then what I call the anti-AGIs, which are Erik Larson, Dr. Woods, you know, and, and so, like, all of 'em can make a great argument. That like, okay, you're smart. Yeah, I agree with that. I agree with that. I agree with that. So, so I'd start off with that, that sort of, like, I don't know if I wanna ask you where do you fall, or just how would you [00:04:00] dissect everything I said?

Dr. Jabe Bloom: So I talk about this a little bit in my dissertation and Oh, cool. One of the things I talk about in my dissertation is, it, it, it's, it, so it's very related to what you, you're, you're saying. So I, I actually divide the current market of ideas about AGI and AI in general into kind of dogmatists and pragmatists.

So there's dogmatists, who kind of are the people who are going to say, listen, like, there's no reason to fight about whether AGI is going to happen. It is going to happen. Any questions? So then, so you can, like, again, you divide dogmatist and pragmatist, and then you say proactionary versus precautionary.

So, proactionary people, I'm sorry, precautionary. So, proactionary people are people who are like, we gotta, we gotta do it. We gotta do something about it right away. And they tend to be, so, like, a, a prime example of this would be, like, Marc [00:05:00] Andreessen's current position, which is something to the extent of, AGI will exist. We are in a multi-state, multi-nation-state competition in relation to it. Therefore you have to build the nuclear power plants as quickly as possible. Because if the United States doesn't give us the electricity needed to do this AGI thing, China will beat us to AGI, and, you know, whatever's going to happen after that is a, you know, is a problem.

And so, so that is a proactionary reaction to AGI, right? From a dogmatist, someone who assumes that AGI is just going to occur. There are other people who are like, kind of the AI ethicists version of this, right, which are like, also accept that AGI will occur, but they argue that what needs to happen is regulation in order to control what happens to the AGI.

[00:06:00] So in other words we need, so sometimes it's called the alignment problem. We need to, we need to regulate AGI so that it will, in fact, produce Aligned outcomes that you know, preserve human value, right? So that that's another reaction. So it's it again. It's a dogmatist someone who believes AGI will occur But they they're trying to figure out how to do it safely as opposed to the opposite side which is You can go down to the bottom and you can say there's pragmatists.

And so I think of AGI, I think of artificial intelligence pragmatists as people who are like, Hey, listen, like we don't really need to have an argument about AGI. There is a thing right now, we call it AI, whether it's intelligent or not, like, I don't really care. There's this thing that's occurring right now LLM, LLMs, you know any [00:07:00] sort of machine learning, blah, blah, blah.

This is clearly causing issues to occur in our, in our businesses, in our society, right? This is clear that this is going to change Organizational structures is going to change employment structures is going to probably change what it means to like, go to college and learn things, right? So you know, there's the radical version of, like, you don't have to go to college anymore.

And there's the less radical version of you probably have to learn how to work with an LLM as an academic, and you probably can't not learn that at some point, right? So these are people that are pro actionary and and precautionary about those things. One of, one side would be on the precautionary side would be saying we need to have better rules about how to use the currently existing AI in business, academia, blah, blah, blah.

We need to regulate, we need to figure out what it means to be ethical. We need to figure out, is it ethical for AIs to consume [00:08:00] other people's research, other people's works of art and reproduce? Those are all questions that pragmatic, precautionary people in relationship to AI are, are trying to address, right?

Like they're, they're not saying like, we, I don't have to wait for this magic God AI to occur. There's this thing already that's already causing, we need to figure this out. And then there's a proactionary people who, you know, are Probably more related to like, how do I make this work in my business?

Like, what do I need to train my people on? What type of resources do I need to get? How do I change the processes and regulations in my business? Because I believe that this thing that already exists is going to change the nature of my business relations and stuff like that, right? So I think that that is one way of kind of expanding your, your AGI concepts into like, trying to say, like, We can think about AGI, and it's probably important to think about AGI, although [00:09:00] AGI has been proposed and talked about for a very long time.

Part of the book, of course, is roughly the promise of AGI has been around since before I was born and has always been promised to be Just in the very near future and it's never actually produced any anything like AGI. And then below that, you can have a secondary conversation and well, there's actual problems that are, that are occurring.

And again, I don't mean problems as in like, LLM, LLMs are by their own and in abstraction problematic. I mean, Integrating LLMs into society, into the technology, into our organizations is a problem that has to be solved. And it's not clear what the answer to a lot of those questions are. I think it's probably more useful, I would put myself in more of the pragmatic side, where the questions for me about AI are primarily about how do [00:10:00] we deal with The fact that what is here is problematic, is, does have problems that need to be resolved.

Because you know, I think differently than maybe crypto or metaverse stuff, the previous two obsessions that we had in IT. The market investment in AI is going, if it doesn't work, it will still take a very long time to unravel the investments in it. So it's a problem that's going to be here for years.

Whether we like it or not, we've got to figure out how to kind of grapple with it and figure it out. We've got to figure out how to make it valuable. Does that, does that make sense?

John Willis: Yeah, yeah, no, I think there's a lot of, I mean, I would say that the, the unraveling is really part of what, like, you know, crypto, like, isn't dead.

I mean, it's, it's, and it's, you know, it has substance, it has substance, right? And, you know, and, and the metaverse, you know, has been sort of, [00:11:00] that one maybe has gotten a little collapsed, but I don't think it's dead either, we'll see. I mean, that stuff's hovering and still gonna come to fruition. I think, I guess, you know, and again, I didn't, like, I'm sort of like what you said when you talked about pragmatism.

The pragmatic view of this, if you will, is, I don't personally, I've been challenged by, you know, Dr. Woods, right? So, okay, you take that challenge, right? And I wasn't even going to cover this in my new book about AI because I was like, I'm like, it's happening. Let me figure out where it works. You know, there's a lot of other things I can worry about.

I was at a wedding this weekend and an old friend of mine's sister came out and said, you, you've been talking about AI and stuff, you know, what is it going to do? And I'm like, you know, and their name's Benji. I'm like, Benji, like, there's a lot more things to worry about than AI. Like, you know, and I'm not even talking about politics, but like, but like synthetic biology, right?

Synthetic [00:12:00] pathogens. I mean, like, there are, there are some really freaking scary things. You know, you talk to Josh Corman about critical infrastructure and water supply. All right, so, so, so I, I didn't really care except that I thought the debate got interesting. Yeah, like, that's like, I get really interested in, like, when really smart people start disagreeing, that just tells me, my nose goes there like a detective.

Right? And I guess that's where I still, you know, that's the part I want to kind of cover late, in probably the last chapter of the book. I mean, I do care if AI does bad, if bad things happen with AI, but I don't really think about that other than, I think the debate is really interesting, and therein lies, like, why it seems like Larson is being less of a pragmatist.

And more of, you know, more of a sort of, you know, what's the word I'm looking for, an evangelist, if you will. So I would almost, to some extent I'd say that, that [00:13:00] I mean, there's been a couple of different approaches to attacking AGI conceptually, like different conceptual approaches, and, and Larson's almost like a realist.

Dr. Jabe Bloom: So part of what he's saying is that, like, the math doesn't math. Yeah. For him, right? Like it's that, it's not a question, he's almost like it's not a question of, like, consciousness or any of those questions. It's that the math doesn't work for him, right? And part of what he's saying about why he brings up forms of rationality is he's basically like, computers can do deductive things, computers can kind of do inductive things, but they can't do abductive things. You just can't. It's not a thing that computers can do, right? And there's some interesting things we could talk about around that, but it's important to note that basically he's saying that all the LLMs, the things that he's seen so far, don't have any approach [00:14:00] to this kind of, like, making good guesses.

And one of the arguments there is like, so, I have different arguments for how I would think about that and why that motivation is more or less.

John Willis: Not to summarize Dr. Woods without getting him on the podcast, and, you know, I'm still not done with the sort of research that he dumped on me. But like, I believe that is his argument as well.

It is very, it mirrors very close, like the math doesn't math. These are just computer machines, and the true, you know, and what he says is we probably could have, in the 60s, taken an abductive reasoning approach in computer science, and we didn't, and so now we wind up today without it. So the second, like the

Dr. Jabe Bloom: second primary test.

I, I thought Larson's was interesting because I was reading it and I was kind of interested in why he would write a book about attacking AGI in general, because there's been a couple written over the years, uh-huh. So I was, I was super interested in that. His argument wasn't the argument that I've [00:15:00] seen presented so many times previously.

So the, the, the classic MIT argument against AGI is from Dreyfus, and so Dreyfus is a phenomenologist, right? So I'm a phenomenologist. I believe in phenomenology.

John Willis: You're going to have to, have to break

Dr. Jabe Bloom: that one down. So, so phenomenology is the study of experiences, phenomena. But the way that you experience something so like one of the way you've seen me talk about apples, right?

What i'm what i'm saying is what we're studying as a scientist What we're studying as a philosopher is not the apple but our experience of the apple We can't ever actually directly experience the apple and there's lots of reasons for that and you know, we can talk about that but One of the things is just to argue, like, you are limited as a cognitive being, you don't have infinite resources to look at things, right?

And so there's a trail there. But phenomenology then kind of comes through a series of refinements over the years. So you can start with Kant, [00:16:00] who's basically like, All experience is in time and space, so you don't have access to reality directly. You only have access to reality through a concept of space and time.

Like you just can't, none of the things that we say to each other make any sense without a temporal frame and a spatial frame. We just can't talk to each other, right? And one of the problems with that eventually, you know, in the future, if you're, if you're phenomenologists argue, it's just too universal, it's too generalized, like it's true, but it's not constrained enough.

Basically, it doesn't bound your cognition enough. It still allows for some idea that there's like a universal way of seeing the world. That's only based on space and time. Right? So future phenomenologists start getting a little freaky.

John Willis: So this is where I'm, you know. There's a point of when I have conversations, it will show, well, it's not really ignorance because you're on a different level.

But so is it the point that that it is that because [00:17:00] we like the only 2 are called primitives that we can understand is space and time. And so, therefore, there are a lot of other things that we don't know about that sort of limit our ability to truly understand what an apple is.

Dr. Jabe Bloom: Yeah, exactly. Or, or, you know, like if we went and talked about like relativity and we could have a really simple conversation where I could convince you that if we're far enough apart in space and time, your future could be my past.

And, and that's just relative, like, and, and so our concept of time doesn't allow for, that doesn't make any fucking sense, right, it doesn't, but, but, but the underlying mathematical things about how observers interact with each other, the kind of concept of relativity, that's what it says. It says that like, The order of time is, is perception based.

It's based on the perceiver. It's not based on some sort of, like, universal global timeline where everything is ordered in the same way, it doesn't occur [00:18:00] like that. Yeah. So, so that means, from Kant's perspective, well before Einstein, right, he's basically saying that the idea that things occur, so cause and effect, so the cause comes before the effect, means that there's gotta be an order to things.

There's gotta be a temporal order to things in order for us to talk to each other. Otherwise I can't say that this caused that. I can't say that this comes before that if I don't have time, right? And space is just, you gotta have places to put things. Everything can't be in the same space. It's got to be kind of distributed and you got to say like it's over there.

It's over here. It's near me. It's far from me. That's right. So you got to, like, fast forward, and there's a couple other phenomenologists, including Husserl, who then kind of starts talking about how, how Kant's understanding causes a problem for science in general. And so [00:19:00] basically, the really rough and, and problematic way to say this is that science tries to eliminate subjectivity out of the system.

But if you, if you follow Kant, you can't get rid of the subjectivity because your perception of space and time is part of your perception. Like you get in this loop, you can't get out of it. So in fact, trying to illustrate the world as being lacking in subjectivity is itself. problematic. So one of the things you could say, again, very crudely and, and phenomenologists will, would hit me if I said it in a, in a more sophisticated space, but is that the elimination, you can't eliminate subjectivity.

You have to have subjectivity in your perceptual thing. And so then phenomenology could roughly be described as observing your subjectivity within a scientific frame. So it's basically introducing subjectivity back into the scientific method, right? And that's roughly what he's talking about.

John Willis: So take us to, and you're saying that Dreyfus is then sort of [00:20:00] the, sort of the MIT version of anti AGI.

Sorry, just one more and then I'll go there.

Dr. Jabe Bloom: You get, Heidegger comes after Husserl. Heidegger is Husserl's student. And Heidegger basically then says it's not just space and time, it's not just subjectivity. It's actually that you occur in a certain place in time. So, like, you have a history, right?

Like, you and I are having a conversation because we exist in the 2020s, not, if we wouldn't have this conversation, it wouldn't make any sense in the 1800s. So there's like a, there's a set of worldly projects that are occurring and we are part of those worldly projects that allows us to have conversations with each other.

So it's not just subjectivity, it's like this whole social structure that we get plopped into when we're born. And that is, like the space and time idea, what is shaping what we can talk about, what we can point to. So the phenomena [00:21:00] that we're experiencing then is just not universal. It's specific to a historical moment.

So maybe that's roughly what Heidegger is trying to get to. There's a couple other things he wants to talk about as well, and we move forward. Last guy, named Merleau-Ponty, and Merleau-Ponty basically argues that it's not just history, it's not just space and time, it's your body. So your experience of the world is, is from a bodied experience.

And roughly what he means by that is like, what you pay attention to for like a good period of your life has nothing to do with concepts. It has to do with like opening doors, getting food, body stuff. So what you think about a lot is not with your brain, it's with your body, right? And so the experience of the world is informed by things that satiate your bodily experience, right?

All of the above, right? That's right. You could think of it almost like it's what we're doing is narrowing down. There's a last one, which is that that your phenomena are [00:22:00] you know, there's a lens of power on your phenomena as well, which basically means that you experience things based on where you are in a power structure.

Like you experience things differently than someone else who's maybe from it. Right. And so all of those things end up being ways to talk about phenomenology. And what you can start seeing there then is that. None of the experiences, none of the phenomena, the ways we experience the world are universal.

They're, they're, they're stuck in human subjects. Yeah, they're stuck in the fact that you have a body that's in a historical place that's within a conceptual frame and you can't get out of it, right?

So basically, Dreyfus's argument is that, that he's looking at these in the 70s, he's looking at these people writing these initial AI applications, it's like you can't, the computer doesn't have a body, and it doesn't have any projects, so it doesn't have, so for Heidegger [00:23:00] and for other phenomenologists, part of like your understanding of who you are is that you're doing something, That you're involved in a project.

So you're involved in writing a book, right? So that means that the things you do make sense only in relation to this future state that you're trying to achieve, right? And basically the argument Dreyfus makes is the computers don't have any future states. They're not involved in projects. They're not trying to resolve something.

They're just trying to like reorder symbols. And that's not about the future. That's about reordering the past to like fit into a present concept or question, right? And so one of the ways to argue about that, again, that you and I have talked about a little bit, is that all of this informs erotetics, which basically means phenomenology, Informs what questions you can ask.

And in the sense that phenomenology is temporal, in the kind of bodied Heidegger concept of it, those questions are all about what, what are you [00:24:00] trying to do? And those questions are gated by like, your life projects, right? And, and Dreyfus basically argues that, that computers don't have any of that.

They don't have bodies. They don't have a sense of time and space and they don't have a, a, a historical present. They may have, like, access to all past history, but that's almost like a complete lack of any linear sense of movement or temporality, because it just, it's all equally valid to them, just all a big glob, yeah?

So he would argue, I guess nowadays he might argue that to have Any sort of AGI, and you can see this in some projects because people still talk about this, the, the computer would have to be embodied in some way. It'd have to be, it'd have to exist in time and in space to be cognitive. And then it would have to have some sort of temporal experience.

It would have to have some sort of [00:25:00] sense of its own history, where it, where it currently is, and the relationship between those two things and some sort of future state that it's trying to achieve. And so again, Larson doesn't talk about any of these ideas in his book, because they're not, he's, like, the mapping doesn't match.

And this is much more about, like, general intelligence, the type of intelligence that humans have. Although, I guess we could have an argument about what general means as well.

John Willis: Yeah, I mean, that's almost, that's the underlying sort of stupid part of this. You know, what is really the definition of general intelligence, right?

And I don't think there's a clear answer to that. So it's almost not worth debate. You know, yeah,

Dr. Jabe Bloom: I mean, they're basically like, he goes down the, I argue it basically that all the, all the AIs that we currently see are, are task specific, basically, right? There's symbol processing.

John Willis: He talks a lot about the Turing error, right? Or the [00:26:00] error. So, what's interesting is, you know, again, like, I'm going on a limb here, but, I think that, I think I, you know, I was being simplistic that, you know, Woods's and, and Larson's arguments were the same, because he, you know, Woods definitely pointed out, you know, Hearst, but I, and I don't recall if he talked about Dreyfus, but I know for a fact he talked about, like, some of the studies they've done with senses, and, you know, like he even said the Three Mile, I'll summarize this real quick, you know, that he, you know, he was involved in, you know, one of the first sort of grants to understand human factors after Three Mile Island.

I think he told me or in one of his papers that he gave me, which was They, they couldn't, they were observing, you know, the operators and they noticed every once in a while they went over to the other side of the room on a different console. I'm sort of summarizing this. You're smart, you probably know this.

And, and they couldn't understand, so they asked the operators, like, why did you just do that? And they would say, I don't know. [00:27:00] And, you know, and then they, now they, since they realized they didn't know, they changed their form of observation, right? So they, and they noticed there was a clicking sound, and in this case the clicking changed.

And so they didn't even know it was the sound, actually, that was driving them. And he also talks about pilots having, you know, sensors, little buzzers on the thighs, to sort of, but yeah, so I think, I think his argument, based on what you just said, is probably a little closer there. But then I guess the, the question is, you know, is Larson's argument sort of lacking, or, I mean, I guess that's not the right way to say it, but like,

Dr. Jabe Bloom: Yeah, it's just a different argument.

I thought it was a really interesting argument, right? His argument would be, from, again, to me, you might kind of pigeonhole him as a rationalist or a realist.

John Willis: Okay.

Dr. Jabe Bloom: In that, like, he seems to indicate that he believes there's certain properties of the universe that would [00:28:00] prevent AGI from emerging, right?

Like the, that these are not like mythological things, that they're not Problems with the current approach, but that there is no approach, in essence, to achieving the AGI stuff that he seems to be arguing against, right? At least that's kind of how I was reading him. Which is, again, just a different historical argumentative structure, right?

So he's, the, the abductive stuff that we talked about has been explored by Computer scientists for a while and part of why he's saying it doesn't work is because people haven't really been able to get it to work very well. It's not a novel thing for people to talk, try to figure out abductive thinking within computation.

It's just, it doesn't seem to work. Nobody seems to have found a good way of encoding for it yet. So

John Willis: In a sense, deductive and inductive. Well, inductive, like, deductive, [00:29:00] like our expert systems, you know? That's right. Yeah. Inductive is the, the causal, you know? Right. Like we have probabilities

Dr. Jabe Bloom: And, and, and you could see from, like, Woods's argument, what's the problem with the deductive thing.

Right. And the, the really quick version of it is that tacit knowledge is very hard to encode within an expert system. So like the, the rough answer is, like, you know, this is a, a bodied version of it, but if I asked you how to ride a bike, we can't create an expert system for how to ride a bike, because part of what you're going to describe is what it feels like to ride a bike.

And that's not a logical deductive structure. It's an emotive embodied structure that has to do with the way your body feels. Like there's no way to rationalize about it.

John Willis: The balance and all. I guess that goes back to the, I, you know, again, I'm not going to try to make the argument for the pros, you know, that's what I'm going to call them. Tough if you don't like it, people out there, if you don't like it.

If you don't like it, people out there don't like it. But they, they would make, and I've heard a [00:30:00] couple of these, and I don't try to go too deep in this, but I think they would probably argue, like, the temporal and the space is sort of being solved and, and they would say that these things can be test oriented.

The place where I think they would have no argument is the body, you know, the sort of the, the sort of the, I forget the name of the guy in the funnel before you get to Dreyfus, but, but the sort of the things that happen in your body, the person you are, and like, I don't think there's any argument for that, but I think they, they, they do try to attempt to say that, you know, there are sort of, you know, the new word agentic, right?

And, and, and I don't want to say agentic is my answer. All right. Okay, but this idea that these things are becoming very powerful, and sort of, give them a goal, do this, here's the inference, learn the incremental steps in achieving the goal. In fact, all the things that we're finding, even the sort of, I sort of laugh at this, like, you know, when people, like the Apple paper, say they can't do [00:31:00] math.

Well, geez, didn't we all know that, you know, day one, but the way you sort of get them to sort of behave a little more is you get them to explain, you know, the, the steps that they take to do it. And by definition, so anyway, long story short, I think that would be sort of their argument. But certainly I don't see any, you know, any sort of prominent answer for the sort of Woods or Dreyfus version, which has got to include sort of, like, the example.

Dr. Jabe Bloom: So the argument against the agentic thing being Heideggerian and temporal is roughly this, right? So, and this is the argument against Simon's version of AI. So this is, like, an old argument, again, against this way of thinking. So basically what Simon describes AI as, the way I try to, Herbert, just for everybody, Herbert Simon.

Herbert Simon, yeah. The way that I try to describe Simon's concept of AI is [00:32:00] this: if I have a pile of Legos and I have an end state, like a, whatever, a little tower, what he's basically arguing AI is, is the, what are the, what's the most efficient set of operations to turn the pile of bricks into the tower?

That's problem solving. Does that make sense?

Dr. Jabe Bloom: And that's what a, that's the AGI that you're currently describing. I have an end state in mind and I would like you to figure out how to reconfigure the symbols to produce this end state, right? That's very different than saying, what should the end state be?

And the reason is, is this gets back to a specific objection against Simon, and it comes from a planning theory, and it's called a wicked problem. And the wicked problem is basically this, Simon's description of how AI works relies on what we call a well formed problem. In [00:33:00] other words, you have to be able to describe the problem well enough for it to be solved inductively or deductively.

If you can't do that, in his, in his model of AI, there's no way to actually, like, produce the outcome, because all he's doing is trying to say the current state and the future state are different, what are the operations required to take the current state and make the future state occur, right? If you can't describe the future state well enough, you can't figure out the operations.
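
(A minimal sketch, in Python, of the "well-formed problem" framing Dr. Bloom describes here, assuming a toy block-stacking world; the state encoding and the operators are purely illustrative and not anything from the conversation. When the current state and the goal state can both be written down exactly, "figuring out the operations" is just a search over states:)

```python
# Toy Simon-style problem solving: if the current state and the goal state are
# both fully specified, "solving" is a search for the operator sequence that
# transforms one into the other.
from collections import deque

def successors(state):
    """Each operator moves the top block of one stack onto another stack."""
    stacks = [list(s) for s in state]
    for i, src in enumerate(stacks):
        if not src:
            continue
        for j in range(len(stacks)):
            if i == j:
                continue
            nxt = [list(s) for s in stacks]
            block = nxt[i].pop()
            nxt[j].append(block)
            yield f"move {block}: stack {i} -> stack {j}", tuple(tuple(s) for s in nxt)

def solve(start, goal):
    """Breadth-first search for the shortest operator sequence from start to goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for op, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [op]))
    return None

start = (("A", "B", "C"), (), ())   # the pile of "Lego" bricks
goal = ((), ("C", "B", "A"), ())    # the little tower
print(solve(start, goal))
```

(The wicked-problem objection that follows is precisely that for most social problems you cannot write down `goal` in the first place.)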

People who argue for the wicked problem argument basically say, yeah, the problem with most design problems, most of the most critical problems we have in society, things that are about, like, you know, how people live and blah, blah, blah, they're, they're political problems, and, and politics makes it hard to describe what the desired end state is, because people don't agree on the desired end state.

So you can't make the [00:34:00] problem well formed. In other words, the problem can't simply be a description of the inverse of what the ideal is because people won't even agree on what the problem is in the first place, right? So the argument here is that you can't, you still can't really ask an AI what's the best way to run San Francisco?

Because once it produced that, if you asked the people who live in San Francisco if it were correct, it's guaranteed large amounts of people would be like, absolutely not, that's not how we should run San Francisco. Because the problem isn't about some sort of deductive or inductive problem. It's about the fact that people have, again, their own projects, their own things they're trying to achieve, and the politics, whatever ones you, you choose to approach, will frustrate some amount of the people involved in their own projects, right?

So there's no way to get around this,

John Willis: And there's, all right, so now I'm going to put a little sort of devil's [00:35:00] advocate stuff here. But isn't that how we, in general, solve problems anyway? Like, in other words, you know, like, you know, again, so I'm going to go down two threads here, right? The devil's advocate is, you know, I, I, I think that these engines have this capability.

To do incredible, like, sort of, like, wicked problems, and maybe, maybe I'm not using the wicked problem properly, but, like, protein folding is a really interesting thing. And I'm not saying that, I think that is more, this is weird to say, that's more deterministic, you know, like there is sort of a deterministic way to sort of use a network to figure out how proteins react.

To get us to probably what is the sort of the cure for cancer. Yep. And, but, you know, as complicated as that is, it, it is sort of deterministic and probably not a wicked problem.

John Willis: But it, being the fact that it can do those things, I was in a workshop just recently, and this interesting question came [00:36:00] up, which is, for a food manufacturer, could we use a neural network to analyze taste?

And so my answer was, you need to get somebody way smarter than me to help you do that. But, I think if we follow the path of what DeepMind has done, you know, to your point, going back to the Herbert Simon Lego thing, these things are incredibly intelligent at learning, to learn to come up with things that you couldn't, you know, like, you know, even the, the classic, you know, the Breakout game, right?

That, like, one of DeepMind's first, you know, breakthroughs was they, they, they set the game to play based on the ball and the paddle and the bricks, and it figured out an advanced strategy on its own, right, going up the side wall. And so that is sort of the Simon problem. So now am I getting lost here?

I think, so going back to, I think these machines' [00:37:00] capacity to solve incredibly hard problems, I don't know that we know how far that's going to go, certainly given the advancements made just in, you know, sort of under a decade.

Dr. Jabe Bloom: Yep, that's right.

John Willis: You know, from beating, you know, beating chess games, to, to Go players, to now, you know, literally, you know, sort of, you know, what's his face, you know, Demis Hassabis won the Nobel Prize, right, for, for AlphaFold, right?

Dr. Jabe Bloom: So, anyway, so I think, so my reaction to that is, so this is, you know, I, I don't mind doing pro-Simon arguments too. Like, I think Simon's a Nobel Prize winner, super smart guy.

John Willis: He's got one too.

Dr. Jabe Bloom: Yeah, he's got one too. So if you go over, if you go over to CMU, where I got my PhD, his pictures are all over the place. I think there's two Simon buildings.

Anyway So what so there's this idea in his In his concept, and I think I read [00:38:00] it in a unique way, but maybe there's other people who have thought about it this way, but So Simon has this idea that he calls bounded rationality, right? So we take the deductive inductive thing, right? One of the things that you look at when you look at deductive and inductive is that there's at least two really important parts of the resources required to do those things.

So one is the amount of time you have to calculate the answer. So again, and even in a deductive thing, so like really huge equations are deductive, right? So you still have to run a massive amount of computational cycles to answer some very completely deductive things. Equations is not free, right?

This is like the way in which cryptography works, right? It costs money to calculate certain things. It costs time to calculate certain things. You make the cost of the calculation long enough, people won't even try it. That's cryptography. So, one is the amount of time. And two is the amount [00:39:00] of computation available.

Right? And so one of the things you can say is, like, humans are significantly bound by these two things. One is, like, We do a lot of computation really quickly, but we don't do an infinite amount of computation. We only have a certain amount of computation. And two, if we're put in competitive environments, we have time pressures, right?

So again, game cycles and things like this means that there's a limited So you end up with these types of problems that AI will always kick our ass on. And they are Problems where there's significant time compression. This is like the amount of time actually matters in relation to the amount of computation you could do, the amount of computation you have available, and then the third one is the amount of data that you can load into the computation within that period of time.

Yep. And so this ends up being Simon's primary argument for why the U. S. government should invest so much money in computation, and it just happens, [00:40:00] just happens, that he is writing this while working at RAND during the height of the Cold War, And what, what type of question would you need to have that has significant time compression, needs lots of data and computation, would you be thinking about during the Cold War?

Should we, should we launch our nukes next? Right? This is a very high stakes problem. A problem that he's trying to, again, I have never read anything of his that's this specific, but this is what I believe he was doing. He's trying to convince the US government to spend all this money on computation, because that can, that can give them a significant intelligence advantage, if they can compute more data in shorter periods of time than Russia at that time, right?

So all the things that you pointed out, right? Things like protein folding, things like this, are all those kinds of things. I'm going to apply more computation than any one human can over a [00:41:00] sustained period of time, which is very hard for humans to do. We're going to load huge amounts of data in, and I'm going to apply a massive amount of computation, right?

All those things are, are really good examples of how AI is very successful at looking smarter than humans. Chess. A calculator, right? I mean, a calculator. That's right. And so if, if the problem is well formed.

Right.

Dr. Jabe Bloom: The game of chess is a well formed problem. The game of Go is a well formed problem.

Protein folding having specific outcomes are well formed problems.

That's right.

Dr. Jabe Bloom: All those things will almost certainly take advantage of the computational power that AI is using right now. And, and it seems as if the kind of data structures and the processes and, and the ways that AI currently is approaching those things is beneficial for shortcutting a lot of the [00:42:00] human effort to encode it, right? So like the rough argument here in relationship to AGI is that in the original, first-gen AI, the idea was, I'm gonna, if I want to have an artificially intelligent doctor, I'm going to go interview a lot of doctors and I'm going to have the programmers write down what they say in code.

And then eventually I'll cover the surface of enough stuff that the doctor will become intelligent. That seems to not be a thing that can happen, right? Expert systems are pretty good at very limited ranges of things, but they almost always require human interpretation at the end of the day. AI things around things like you're describing or like around questions like, hey, how do I upgrade my code base from Java 6 to Java 7, which is again, It's a very deductive problem.

There's, like, maybe a little bit of, like, I have opinions about which methods to use because of my local culture, whatever. But [00:43:00] that's, that's, that's just grunt work in a lot of ways. So there's a lot of promise, I think, for, in, in the dev ops, concept of toil. It's just that toil becomes this much broader definition at this point.

It's anything that can be deductively calculated that humans are just doing manually right now. And I think there's just a huge amount of ground there that Simon's concept of AI, take the current state, this is my ideal state, you just do the math to calculate how to transform the current state into the future state, That's what that's what Simon describes as being design.

I think it's engineering. I don't think it's design, but all those types of problems seem to be ripe for the type of technology that we're talking about. I mean, I don't think anyone should be surprised in a way if you look at it that way. That if I gave you a nuclear power reactor and all the computation that could be driven by that nuclear power reactor, that you could [00:44:00] calculate very big problems very quickly.

I, that seems to be a natural thing. But but those are the types of problems that I think seem to make most sense to point this stuff at.

John Willis: So are you saying then the wicked problem, and I know this is not what you're saying, but like I'm trying to sort of, the wicked problem is like where there's sort of like, You know, I guess one way would be there's multiple answers and but then, like, could we, like, in the city, like, how do we make, how do we, you know, you know, sort of the problem statement is I want San Francisco to be a clean city.

Yep,

John Willis: that's a wicked problem, right? That's right. Because, like, like, all the things that would come into, like, well, you guys are all going to lose your job because we're going to get rid of these things. And but, but, but couldn't we I mean, I'm sure the answer is yes, we could use these sort of deductive inductive approaches.

Which is what humans do anyway, right? Ultimately. If the, [00:45:00] so the mayor says we're going to make this the cleaner city, not everybody's going to agree with it. We come up with some plan. We started tacking it and we hope that or, you know, in the aggregate, they hope that it's the best of all, you know, things.

And so, like, in that sort of, could you make the argument that AI could actually do like humans do, and solve the wicked problems by getting, like, the best three answers?

Dr. Jabe Bloom: I, so I would say that the way I would think about it is that AI can filter the predictions, right? In other words, I think that if we do this, this is the type of thing that will occur.

We can create a model, the AI can really quickly create a model of the proposition and then calculate the potential outcomes, right? So you could get some things where you could have arguments that both filter, as in like eliminate certain propositions because they just seem to be inherently wrong based on math or deductive analysis.

But also you could get the [00:46:00] opposite, which is AI could propose solutions that no one has thought about before that may allow for a better way to move forward. But the critical thing about wicked problems and design problems in general, as opposed to engineering problems, right? So engineering problems is like, how do I build the bridge that we can drive a 15 ton truck across?

Okay.

Dr. Jabe Bloom: Design problems are iterative. And iterative in the sense that you're not just moving towards a solution, but that every solution changes the nature of the larger solution. It, it shows you that the solution that you wanted to get to is, is maybe not exactly right and we need to wiggle towards it, right?

And so the result of that is, like, you could say, like, a, a classic example of a city getting cleaner would be, like, should we centralize the authority or decentralize the authority for how we enforce rules? Like, should some neighborhoods be allowed to allow homeless people, [00:47:00] or do all neighborhoods need to be subjected to some sort of police force that's going to indiscriminately remove all homeless people, or something like that, right?

That's not, that's not an answer that you can get an AI to provide you, because the question then is, like, not even about homeless people. It's about how empowered or disempowered does any neighborhood of people want to feel in relationship to this. Some people, the answer is, I don't want anything to do with that.

I wash, wash my hands, I'm perfectly happy for the cops to come in and do whatever they want in other neighborhoods. It may not be like that. They may be like, we're all poor and we're all just one step away from being on the street, so we don't want that at all. Yeah. So do you see, like, there's, like, eventually tacking as opposed to an absolute end state.

And I think that, and, and from, like, some of the conversations we had before, this is, this is incompleteness. This is the way in which the most complex problems cannot be completed. They don't have an answer. They only have movement [00:48:00] and balance and tacking, right? And the thing is, the AIs aren't really good at that. I mean, this is roughly what we mean by abduction. They're not really good at, like, having ambiguous phase space answers. They often have, like, point answers. This is the right answer, that's the right answer.

John Willis: I read a little bit about the mosquito thing, right, where they tried to use DDT on mosquitoes, and it, like, started killing the cats, so, I forget what it, you know, like, it literally, these compound problems. You know, something I never thought I'd be sort of asking you on this podcast related to this subject, which is, where the RAND Corporation just shows up.

Yeah, all over the place, you know, like, in two books now. It's just incredible how we talk about Bell Labs like it's the glory land of, like, inventions, and the more I learned, when I went back to sort of Deming's roots and Simon and Norbert Wiener, how RAND just keeps showing up. And now in AI, it just shows up again and again and again. It's just.

It's just. There's something amazing there, right?

Dr. Jabe Bloom: [00:49:00] Yeah, I mean, it is, like, it is one of the original think tanks that was based in technology, not just in some sort of sense of politics, right? And, you know, if you look back at the history of the internet, they're, like, the RAND Corporation, I think it's even in Pittsburgh, their office is like the first non-university node ever on the internet.

And,

Dr. Jabe Bloom: And so, like, they're really, really early into this IT thing and understanding this thing, and there's a set of them you know, part of the reason why I talk about Simon and the, and the nuclear problems of, of the Cold War is because a large amount of the research they were doing had to do with figuring out what was happening and, you know, Quite famously have you ever seen Dr.

Strangelove and you got the crazy [00:50:00] doctor who's like the Nazi, who's like, just bombed them right away.

Yeah.

Dr. Jabe Bloom: Yeah. That character is based on a RAND employee who wrote a book called On Thermonuclear War, in which he literally proposes that the best answer to solving the Cold War is to just launch all of our missiles, 'cause we will win. That's, that's his answer in the book. So RAND is a very interesting organization. And like you said, it's got its fingers in all of this early stuff and had access to all these scientists through the early Internet, direct collaborators. And again, Simon's like,

John Willis: And I'll note that was, that was the first thing that Dr. Woods said, are you writing about Allen Newell in your book? Like, like, that was a litmus test. Like, if I, if I said, who's that, he would have hung up on me. You know, so, but, so let's bring it back to Erik Larson. So, look, overall, what, [00:51:00] like, this is the weirdest book report review we've ever done.

What did you expect? Yeah, of course, I know. But what, what would, what would we say about the book? Well, you know, what, what is the sort of value? What, what did you learn? I mean, you, you said you, you were surprised by some things, sort of, in summary. Yeah,

Dr. Jabe Bloom: so I think it's a great book for people to learn about the types of rationality.

I think it does a, it's one of the best comparative analysis of those, like for an average audience. I think it's really amazing that, that side of it. So I think that's something to really keep in mind. I think I think in general, if you, if you care, if you care about the AGI dogmatist argument, I think this is a good argument to have on board if you want to have fights with people who are worried about AGI, probably a useful one to have on board. [00:52:00]

But again, in the same sense that we see Dreyfus and other people make arguments against AGI, it doesn't seem to have any impact on what is actually being produced and that's machine learning and blah, blah, blah. And to me from an ethics point of view, from a politics point of view, from a practical point of view, from a, like, how does this change the world point of view?

Like the question is like about what exists and I'm not, I'm not. Necessarily super interested in having in fighting with people about, you know, dreams that they might have about the future. I'm more interested in trying to keep people focused on the implications of what exists.

John Willis: And I think that that is important.

So I guess I do have a last last question. So what what are you finding useful in your sort of day to day? With the AI stuff, is there any particular things that, you know, you know, like sort of like how you either [00:53:00] do research or how you're solving problems or how you help organizations solve problems?

Dr. Jabe Bloom: Sure. I mean, I think a couple of things. I think, I think AI can be a really good opponent at times. And what I mean by that is, I can feed it something and say, why am I wrong? And it will produce a set of kind of objections that I can then ponder and think about. And that, that concept, I've proposed that concept in human systems for a long time. It's basically, when you enter into the initial critique with somebody, you should just say, like, just tell me what's wrong.

I don't care. Like, don't bother to tell me what's right. Just tell me what's wrong. So that I, so that I hear a set of objections in a safe space, like who cares if the AI thinks I'm wrong. But it can give me a list of things where I'm like, if a human actually makes that objection, I've heard it before.
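
(A minimal sketch of the "just tell me what's wrong" pattern Dr. Bloom describes, assuming the OpenAI Python client; the model name, file name, and prompt wording are illustrative assumptions, not anything he specifies:)

```python
# Ask a model to act only as an opponent: list objections, no praise, no fixes.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft = open("draft.md").read()  # hypothetical draft you want critiqued

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a skeptical reviewer. List only objections to the "
                    "argument below. Do not say what is right, and do not "
                    "soften the critique."},
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```

(The point is the framing rather than the tooling: the model is asked only for objections, which you can then triage the way Dr. Bloom describes.)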

I've thought about it before. I can kind of think my way around it. So I think that's interesting. [00:54:00] I've done, you know, console and a couple other kind of IDE programming language versions of this, and it scares me sometimes how good it is at producing pretty simple things, but, like, elegantly and just much faster than I would be able to do.

John Willis: It, it is definitely getting better too. Yeah. Yeah.

Dr. Jabe Bloom: I mean, I, I told you, I, I like was playing with the Red Bead game and I wanted to see if I could make a, yeah, a control chart to explain to people. And I had some, you know, I wanted to be able to have upper control limit, lower control limit, all the sigmas.

I wanted to have, you know, performance limits, things like that. So it was like a relatively descriptive thing. And I, and I played with it for an hour, and it, it produced what would have easily taken me a week to produce.
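
(A minimal sketch, in Python, of the kind of control chart Dr. Bloom describes generating for the Red Bead experiment, assuming made-up daily red-bead counts and simple three-sigma limits around the mean; the data and styling are illustrative:)

```python
# Plot red-bead counts per draw with a center line and +/- 1, 2, 3 sigma limits.
import numpy as np
import matplotlib.pyplot as plt

counts = np.array([9, 11, 8, 12, 10, 7, 13, 9, 10, 11, 8, 12])  # illustrative data

mean = counts.mean()
sigma = counts.std(ddof=1)

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(counts, marker="o", color="black", label="red beads per draw")
ax.axhline(mean, color="blue", label=f"center line = {mean:.1f}")
for k, style in zip((1, 2, 3), (":", "--", "-")):
    ax.axhline(mean + k * sigma, color="red", linestyle=style,
               label=f"+{k} sigma" if k == 3 else None)
    ax.axhline(mean - k * sigma, color="red", linestyle=style,
               label=f"-{k} sigma" if k == 3 else None)
ax.set_xlabel("draw")
ax.set_ylabel("red beads")
ax.set_title("Red Bead experiment: control chart (illustrative)")
ax.legend(loc="upper right", fontsize="small")
plt.tight_layout()
plt.show()
```

(The upper and lower control limits he mentions are the plus and minus three-sigma lines; "performance limits" would be a separate pair of lines set by the customer rather than computed from the data.)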

John Willis: Now, if you get it, it's, you know, you got to ask the questions, [00:55:00] right?

Yeah, that's right. I, I got a good story for you, and then we'll sort of wrap up, though I know you're going to get a kick out of it. So a friend of mine, about a month or two ago, I caught up with somebody I grew up with, and he was in IT his whole life, but he doesn't, like, he's sort of, like, you know, he's retired now from IBM, like 30, 35, 40 years at IBM.

You know, he, you know, he gets in at eight, does a great job, gets out at four, doesn't think about Shakespeare and literature. And so now that he's retired, he's going into this, all this stuff about, like, Shakespeare, that it wasn't Shakespeare. There's this whole, like, group of people that sort of have the, that it wasn't really Shakespeare, it was some duke. Like, he tells me all about it. It's like crazy stuff, right? And so he was talking about how he doesn't like AI.

It was some duke. Like, he tells me all about it. It's like crazy stuff, right? And so he was talking about how he doesn't like AI. And I said, well, I, you know, you know, it doesn't tell the truth. I'm like, yeah, yeah, you know, we can, he said, I said, all right, let's go into chat GPT. And he's, he mentions this, you know, he said, you know, we, we froze a question, like, you know, is Is such and such, whatever the name of this guy is, they believe is the [00:56:00] guy who actually wrote most of the sort of sonnets and plays and came up said, no, that's a conspiracy theory.

Like I told you, I said, hold on, hold on. Let's ask it to assume that he is, that this is a true statement, and then, like, give us the arguments for why it's true. Right? And it, yeah, brilliantly started creating, and he's like, wow, you know. But then, like, on like ten bullet items, right, like the eighth bullet item, it gives the Occam's razor argument, right, you know. And, and so, so I go back and I say something like, I was, like, I was talking to it almost like I talk to you, where I said, really, you're gonna use Occam's razor? And it apologized, really apologized, and said, you're right, that is a weak argument. Like, it's stuff like that, that, that, like, blows me away, you know, so, so

Dr. Jabe Bloom: So, like, I think the trick, and one of the reasons I brought up critique as an example of what it's good at, [00:57:00] I think one of the most important tricks, and this is again a phenomenological thing, is to kind of recognize... Dennett describes it this way, and I think it's a good enough way to describe it: he says there are three stances people take towards objects.

So they might take the objective stance, which is like, this is a rock. And they look at the rock and they go, it's a rock. It doesn't have any intentions. It doesn't want anything. It doesn't do anything. Now, there's panpsychists and other mystics who would say the rock does, but most people will look at the rock and they're like, this is a fucking rock.

It doesn't do anything. And that's interesting. But he says, up from there, there's another stance, and it's the design stance, and it's the way in which you kind of walk around and you recognize that, even before you understand what it's for, an object has been designed, and therefore must have some purpose, right?

Sometimes I try to explain it like this: it'd be like walking around in the woods and you find a cup [00:58:00] for the first time, and you look at it and you go, because this is shaped for my hand, I don't know what it's for, but clearly somebody made this damn thing to do something. And so there's an assumed intention of the object, but the intention is thought to have been put there by someone else, by a human, right?

So the designed object has an intention, but the human put the intention there. The last one he calls the intentional stance. And what he describes here is the sense in which, if you sit down and play chess with a computer, almost by definition you have to assume that the computer is playing the game with you. That the computer wants to win. In other words, because if not, it's not fun, it's not interesting. So there's this way in which humans can look at certain types of systems, interactive systems, as wanting things in the same way a human wants something. So there's an [00:59:00] interaction that's intentional on both sides, right?

And I think that the trick with AI is recognizing when you're slipping between the design stance and the intentional stance, and occasionally being like, I should really be paying attention to the fact that the AI doesn't have intentions. It's a trick that it's playing on me. And if I can just do that and say, no, no, no...

Yeah.

Dr. Jabe Bloom: This is just output that I can consume and structure. I shouldn't assume that it's intelligent.

John Willis: I think we're going to get used to this. You know, like that Occam's razor thing. It had no intent. It wasn't human. But it had enough sort of data that it could answer that question.

And I think the mistake we make, and I think this is a good way to even wrap up the AGI argument, is we give it intention. That's right, when right now it really is just [01:00:00] incredibly complex math at a scale that creates this seeming intention. Again, I think of playing computer chess, and that's what we've gotten used to, right? You know, Andrew's cursed me now. I've got that chess game on my... and I play it all the time now. I'm like, I'm so mad at him.

I mean, it's fun, because as I get older it's keeping my brain active, but as, like, puzzle solving. But yeah, maybe back 25 years ago people were like, ooh, how did it do that? And now we don't. I think even Larson talks about this in his book, about how at this point with chess, we know it's like a calculator, like early on with chess.

And I think we're going to get more used to being able to interact with computers and do these sorts of dialogues. Like, you know, you go back to Eliza, right? That's what I was going to bring up. In its time, it was sort of useful. Yep. You know...

Dr. Jabe Bloom: the inventor of Eliza got [01:01:00] ripped at all of his compatriots, right?

All the people who were working with him, because they kept on saying, this is proof of artificial intelligence. And he was like, no, this is proof that the computer can fake being intelligent, and it's tricking you.

Dr. Jabe Bloom: And I think it's really interesting to think about that, to the extent that it was a very simple program and he figured out some very basic heuristics that worked very well for tricking people.

And he would say, like, people would talk to it for hours. And, you know, there's a famous story where one of the first people he showed it to was his secretary, who then proceeded to ask him to leave the room. And he was like, why? And she was like, well, this is a pretty personal conversation I'm having with the computer.

And he was like, wait, what? But also, think about the number of reports that we get right now of people being concerned, finding out about privacy rights around AI too late, because... what have they been doing with the [01:02:00] AI?

John Willis: Well, yeah, yeah, that's a whole other story. They've been trying to, you know?

I think it's just interesting. But it's funny too, like, if you say you've never thought about going to or haven't seen a therapist, then you're lying to somebody. And I've had some good therapists and I've had some bad ones, and I don't know that spending an hour with Eliza, which I've never done, is any different than a crappy therapist that you find out, after the third week, is just sitting there going, oh, yeah.

Yes. No, that's good. Tell me more, you know, like the sort of keyword thing. You know, I was looking at the clock and saying, there's no possibility of not having two parts to this. But I guess there is another question sort of begging to be asked. You just went through your PhD.

I know, like, I was sort of with you near the later stages of it, and I know how hard that was, and it was something you'd been working on so hard for years. So what do you think about education and all this [01:03:00] stuff? And, you know, because you're a scientist, throw out the "well, I had to do it the hard way."

You know, the "I had to walk 15 blocks to go to school." I mean, is there an interesting view of, maybe it's inevitable, but what does education do with all this stuff?

Dr. Jabe Bloom: Yeah, so again, I think one of the things that even design doesn't do very well, that's under-theorized, is critique, is criticism. So, like, you and I have talked a little bit about the idea of curation as being one of the key new abilities that people are going to have to have in this new economy: how do I curate the right things?

The other version of that, I think the precursor to being a good curator, is being a good critic, being able to interact with things in a critical way. And I don't necessarily mean critical thinking. I do mean, because it's an interactive system, asking the right questions, poking and prodding at things, and things like this.

So I was talking to my brother-in-law, who is a [01:04:00] professor of behavioral science at Carnegie Mellon. And one of the things we were talking about is, for his stuff, where he's creating mathematical models of social problems and then running data through those models to get answers, he basically said it would be irresponsible, soon it will be irresponsible, to not teach these people how to use AI to do this, because

It will be irresponsible to not teach these people how to do AI to do this because There's no way that you could build better models from scratch than the ones that the AI can produce. So you can improve on the ones the AI can produce, but it can just produce. base models that are much more sophisticated than most of these kids are ever going to be able to produce anyway.

So it's ridiculous. And he was pretty down on it, thinking that, I mean, there's a chance of that kind of illiteracy or de-skilling problem that you see, right? Nobody will understand the models because no one has [01:05:00] built a model from scratch for so long, stuff like that. But one of the things that I tried to talk to him about, through this kind of critical lens, is: wouldn't it be part of the educational process, then, to produce models in front of the students and ask them what's wrong with the model? Part of the process would be to assume that the model is being produced, but to say, you, the humans here who are specialized and being trained at a high-level university, what we expect you to be able to do is figure out what's wrong with the model. Like, what's missing, what, you know... so that you have this base theory that, again, your responsibility as a curator, your responsibility as a producer, is being a good critic.

And this, again, gets back to the things that I pointed out earlier about assuming intelligence. The trick here is to say, [01:06:00] as the producer of something, I'm not going to say, oh, the AI made that. I'm going to say, oh, I made that with an AI. Completely separate conversation, because I'm retaining responsibility for the outcome.

So I have to... it's the same way someone would come to you and be like, you know, this thing that you made screwed up. If you said, well, you know, Billy, three rungs down from me on the corporate ladder, who sits over there, that's his problem, not my problem... no, it's not. It's your problem.

Oh, you know, you're responsible.

John Willis: But I never liked the abstraction argument. Like, automation is a great one, right? It's never held true. Like, oh, if we automate all this, then they're going to lose those skills. And there's something to that, and I think there's a fine line in figuring out which things are dangerous

if you don't know the backstory, and which things are not. Yeah, probably a science unto itself. But in general, my experience has always been that that argument has never held up.

Dr. Jabe Bloom: I think the question is again, [01:07:00] like we get back to this, the same thing we talk about occasionally too, is like, what's the economically viable ignorance, right?

Like, yes, you could de-skill stuff, but is it economically viable to have, or not have, those skills around? Right? Like, we don't have stone carvers, tons and tons of stone carvers around.

That's right. Right.

Dr. Jabe Bloom: That was knowledge we once had. Do we need to keep that knowledge around? How much do we need to keep it around in order to preserve the things that we already have?

Those are all questions of how we just kind of deploy our resources. And so, like, it's not, it's, it's, being skilled in a particular trade is not an inherent good. It's an inherent good in relationship to a certain context. If the context changes, maybe the skill isn't as valuable. But it doesn't come without a price, I guess.

John Willis: Right, there is always a price. And I guess that's the fine line: what is the price, you know? I was thinking about, do you need to know what, you know, somebody did to run a successful... I mean, he's probably not a good example, but [01:08:00] even what Ohno did. Do you really need to understand TPS, like the stuff you guys teach at Ergonautic, right?

Like, you don't have to go back to Ohno every... well, I mean, "we have to stop here, because what we're telling you here actually came from, you know, lean and agile and goes all the way back to Ohno." Right? Like, that would be sort of nonsensical, right? But, well, good. Yeah. You know, I think everybody who listens to this podcast knows you, but maybe every once in a while somebody new will stop in.

And so where do they find you? And we'll have all your stuff in the show notes as always. But what would you like people to know about what you're doing right now?

Dr. Jabe Bloom: Sure. So, Dr. Bloom, Dr. Jabe Bloom. You can find me at Ergonautic, working with Andrew Clay Shafer, Sasha. We work on helping people build work systems, and work with the work systems they have, to improve efficacy and efficiency.

And I think I'm in the middle of writing a book like John does occasionally. So we'll see. [01:09:00]

John Willis: Yeah, that'd be cool. I'd love to see that. I'm sure a lot of people want to see that book. Alright, my friend, it's always a blast. It's great to hang out and chat about something cool again, and I'm sure we'll find something in the near future to chat about.

Take care.
