S4 E20 - Dr. Jabe Bloom - Navigating Complexity with Pragmatic Philosophy

In this episode of The Profound Podcast, I have an enlightening conversation with Dr. Jabe Bloom, a prominent voice in the fields of DevOps and digital transformation. The discussion revolves around the philosophical underpinnings of scientific reasoning and its application to complex systems, particularly through the lens of Charles Sanders Peirce's work on abductive reasoning.

Jabe Bloom begins by exploring Peirce’s contributions to philosophy, particularly how Peirce's concept of abductive reasoning offers a framework for making educated guesses in situations where data is incomplete or variables are unknown. This idea becomes especially pertinent when Bloom contrasts the scientific method typically used in complicated domains, like Lean manufacturing, with the unpredictability of complex systems, where multiple hypotheses might be equally valid.

The conversation further delves into how these ideas connect to digital transformation, especially in organizations navigating the complexities of modern IT and business environments. Bloom highlights the importance of fostering environments where experimentation and educated guessing are encouraged, as this aligns with Peirce's pragmatic approach, which values the usefulness of an idea over its absolute truth.

To wrap up, we also discuss the broader implications of Peirce’s work on modern AI and socio-technical systems, emphasizing the need for a deeper understanding of how these systems operate and how to integrate artificial intelligence into complex human processes.

Resources and Keywords

People:

  1. W. Edwards Deming

  2. Charles Sanders Peirce

  3. C.I. Lewis

  4. Steve Spear

  5. William James

  6. Ludwig Boltzmann

  7. Albert Einstein

  8. Mark Burgess

Resources:

  1. "Mind and the World Order" by C.I. Lewis

  2. "Toyota Kata" by Mike Rother

  3. "The Myth of AI" by Erik J. Larson

Terminology:

  1. Pragmatism

  2. Abductive Reasoning

  3. Scientific Method

  4. Complex Systems

  5. Complicated Systems

  6. Cynefin Framework

  7. Operational Definitions

  8. Positivism

Transcript

John Willis: [00:00:00] Hey, this is John Willis. This is The Profound Podcast. One of the things I always forget to do is, I don't have sponsors, but what I should at least do is plug my book. So don't forget to take a look at my Deming book, Deming's Journey to Profound Knowledge. And then for those who don't know, I have another book, stories that were cut out of the original book.

It's just a book that's out called Profound Stories. It's like 18 chapters of stories that didn't really fit the system of profound knowledge arc, but were great stories, and I didn't want to just throw them on the cutting room floor. So anyway, check out my two books.

Hey, everybody. It's John Willis again. The Profound Podcast. I have one of my favorite all time people, Dr. Jabe Bloom, by the way. I got to sit in on his dissertation, which was like, incredible. That was like, that was mind blowing. I feel privileged to be there. So how you doing, Jabe? I'm doing pretty good, man. [00:01:00]

Jabe Bloom: Still doing a lot of research, trying to, like, figure out the next thing to write things down about. So

John Willis: yeah,

Jabe Bloom: conversation today.

John Willis: What do you do when you have a PhD? You don't go for another PhD, right? People do, but I think that seems a little

Jabe Bloom: bit masochistic.

John Willis: Yeah.

Jabe Bloom: Yeah. I mean, I think that my next step academically is just start publishing.

Okay.

John Willis: Yeah, of course.

Jabe Bloom: Where I want to publish and what type of stuff I want to publish.

John Willis: Yeah, yeah. No yeah, no, it's really good. So anyway, so, so if you don't know Jabe, you've been hiding under a rock. You know, Jabe is, you know, really one of the stronger voices in what we could call DevOps. We could call it digital transformation.

We can, we can give it a lot of names, but he's been an incredible mentor to me. He was really helpful. You know, I, I say this, I don't think I've ever said this to you, but I think I pulled a Tom Sawyer in my Deming book, right? Like, when I heard that Deming said it took him six [00:02:00] tries to read Mind and the World Order, I'm like, you know what, maybe I'll ask Jabe to read it and explain it to me.

And you gave me a whole overview of pragmatism, which really worked out well, and actually, I think it has a lot to do with what I want to talk about today, which is C. S. Peirce, because you went beyond just explaining C. I. Lewis's Mind and the World Order and, and, and how Deming was related to pragmatism, and that definitely influenced him, right?

We can see that all over his work. But, but the, you know, understanding Peirce became a big part of my book because I thought that really explained a lot to me, you know, just the pendulum, you know, his pendulum experiment. What, what, what's the, what's the diminishing point of returns, right?

Like, it's so, and which becomes the pragmatism, but I think, What's really interesting is, we had, you had some, you gave me [00:03:00] sort of a couple quick overviews of abductive reasoning, and I remember you said to me something, it's like a murder mystery, and it stuck with me, and it's funny because I had never really heard of abductive reasoning, I don't have a degree in philosophy but a lot of people I talk to are really intelligent, they give me the same line, I've never heard of that, what is that?

So, I think there's an interesting thread about Peirce, abductive reasoning, what was Peirce's relationship, like, how can we connect the dots from Peirce to AI? Or how can we connect the sort of anti-dots, if you will, to abductive reasoning? And, and then, you know, we want to maybe talk about the Myth of AI book. So I'll start off with: what does Peirce mean to you and your work?

Jabe Bloom: So this is like a shibboleth thing and there's no way to know this until you've bumped into it already, but his name is spelled Peirce, which looks like [00:04:00] pierce, as in, like, to puncture something, but it's actually pronounced purse. Oh, there you go. Which is kind of interesting. I, I, I found this out by giving a presentation to a bunch of academics who all came up afterwards and were like, you're not pronouncing the name right.

Anyway. So it's just a weird little shibbolethy thing. Like I said, nobody has any reason to know that we got that.

John Willis: We have that fixed now. So

Jabe Bloom: exactly. So for me a lot of my early work in philosophy, bridged continental philosophy and pragmatic philosophy and specifically within the pragmatic philosophy scientific philosophy.

So philosophy of science stuff. And that had a lot to do with being interested in and trying to understand experimentation. What was, what's an experiment? How do we come up with experiments? You know, if you look at a lot of my early talks, I had [00:05:00] talks like how to fail well or how to think like a scientist.

These were like the things that I was primarily exploring where I wanted to understand philosophically, what is it that our scientists are doing, right? Like how are they thinking? And that itself kind of bloomed or came out of the fact that in my mind, Lean is basically an attempt to teach frontline workers in industry the scientific method, right?

So that's, that's the way that I generally think of what Lean is actually doing underneath the covers, right? And it's the difference, Historically, between like training with an industry where the main American approach to industrialization was to like, figure out how to teach someone to make a bullet or, you know, fold the shirt or something like that, but these very specific [00:06:00] learning processes, which are great and have been clearly adopted and integrated into lean theory.

But that's more kind of like on the poka-yoke side of things, like the way that materiality shapes what you can understand and things like that. So there's a whole subset of, like, stuff in there. But in general, this idea, and you see the clearest, sharpest expression of this in kind of the modern literature would be, like, the Toyota Kata theory, right?

That idea of what Toyota was doing is just explicitly this idea that what you're doing is teaching people how to be scientific, how to be rational. Anyway, so a lot of my early research then collided with Cynefin, or complexity theory. And so then there's this very interesting question you have to kind of work through or understand, which is: can you use the scientific method, the way it's described in general, to explain things that are happening in the [00:07:00] complex domain?

Inside of emergent systems and blah, blah, blah. And what does that mean? And how would that work? And for a long time, a lot of the reading and thinking I was doing was basically that the traditional scientific method probably mostly applies to things that we would call complicated systems. Things that we have reasonably good theories about and that we're trying to gain control over as opposed to simply understand at all.

And that there must be something else out there that scientists are doing in the complex domain, but it's not this hypothesis confirmation or hypothesis verification or hypothesis falsification activity, right? And so Peirce is the person who I think gets closest to beginning to open up about what it is a scientist is doing [00:08:00] prior to having a hypothesis.

And that's what I think you're doing in the complex domain, basically. Yeah,

John Willis: that's interesting. So, like, but yeah, I know you already blew my mind. We're only like, 10 minutes in. So, so basically, you know, just for, I mean, you know, at this point, the quick, quick, you know, sort of comic book version of Cynefin: complicated and complex, right?

Complicated is you take a car apart, you put it back together. Complex is driving a car when it's raining and the kids are screaming and all those things, right? So, so is the, is the discussion that, that you struggled with, like, like, the scientific method in Toyota, sort of like Toyota Kata or Rother's book, right?

Like, like, everything is an experiment. And I think Steve Spear has a great quote in his, you know, they were a community of scientists continually experimenting. But in a lot of ways, and like, I think we can go on a tangent and say everything's a complex system, [00:09:00] but we'll, we'll table that for this discussion, but building a car is more sort of grounded in a complicated domain?

So like, so the whole Toyota Kata and scientific method of thinking about things in terms of like, I'm going to do this. I'm going to experiment. I'm going to change the light. I'm going to change the way the tools and the kanban boards go. But you're saying that, you see, I guess I never thought about the difference of, in a complex domain,

you question whether those work?

Jabe Bloom: Absolutely, like, just initially, right? So, like, in a complicated domain, we have a shared thing about science, which is controlled variables, right? So, like, we have a controlled environment, you have controlled variables, etc. And the whole point of a scientific method, the method, is to say that this particular variable is important to the outcome, and I know that this particular variable is important because I've isolated the other ones.

Yeah, so I've created like a clean room [00:10:00] and I have a stable temperature and, you know, whatever your kind of isolation processes are. So all of that is literally reducing the complexity of the system that's being experimented on. That's what you're doing. You're isolating the system away from the complexity of a normal environment.

Like, most experiments that are physics experiments, if you tried to do them in your front yard, without any controls, they just wouldn't produce predictable results, right? They have to be in these controlled environments. So that is a very kind of like a physicalized or, you know, material version of the reduction of complexity of a system into a complicated system.

And so when you're looking at something like a car or these other ideas, at least in theory, within the Toyota Kata and other things that Lean is doing, the set of variables is probably already pretty well defined. You might say, I'm not sure which [00:11:00] variable is the most important for solving this particular problem, but the variables that you're looking at tend to be, you know, laid out.

They're pretty clear, like, how many bolts does this machine make per hour, et cetera, right? In the complex domain, one of the things that I think primarily differentiates, like, thinking in a complicated way or thinking in a complex way is when you're thinking in complexity, you have a sense that the system is going to stabilize or come to be pattern-oriented or repetitive, but, in fact, what's missing is either you're not clear what the important variables are, and/or you're not clear what the important relationships between the variables are.

And so that's what makes it appear complex, is you're just not sure what variables matter or how the variables are interacting. And so even if you can see a pattern arising in that domain, you can't necessarily say, I already know which variables to kind of examine, right? And so this is where you get kind of a [00:12:00] temporal version of experimentation and thought, because what you're looking for is patterns and repetitions and things that seem to differ or stay the same over time.

These are, like, the ways in which you start exploring the complex domain. And in industry in general, you can't survive if you put your whole organization in the complex domain, because it's just, it's basically constant guessing, right? And you can't guess your way into market share or into a stable price or into just-in-time delivery of anything.

You have to have control over the system. So again, I think what you can kind of end up seeing is things like statistical process control, et cetera, all those things that are generally attempts to understand the mechanical relationship between a set of known variables, that's complicated, and that uses scientific method well, [00:13:00] and those are important.

So that's rich for that. You can try to use those methods over in the complex domain, but my fear tends to be that that gives people the excuse of treating complex problems as if they were complicated. Right.

John Willis: So, so this is where I guess my naivety, or sort of, like, I, I just always assumed that, like, we can apply, you know, some theory of knowledge or, you know, scientific thinking or the scientific method to sort of

any domain, and even more fascinating, the complex domain. And I'm trying to scramble, like, like, don't we sort of do that in our environments? So that, so that's sort of one thing, is like, I never thought about it, is like, am I sort of misguided, or my thoughts are really that you can't, or I haven't thought about it as deep as you. But I'd probably want to move quicker to what, what was it Peirce telling us.

Purse telling us. Yep. You know, or, or the insight that you saw, you said it was sort of [00:14:00] the, the, the ideas before. Yep, that's right. Scientific group. So, so I guess on the first question before the hypothesis,

Jabe Bloom: Right. Prior to the hypothesis. Right. Okay. So, so basically one way to think about this is, there's a hierarchy of rationality that is standard issue, and it's deductive versus inductive, right?

So deductive is any syllogism. Syllogisms are things like one plus one equals two, right? There's like, you're not adding any information in that statement, it's just a true statement, right? You know, Socrates is a man, men are mortal, therefore Socrates is mortal. Right. Again, you haven't added anything.

You've just kind of, like, linked statements together, linked predicates together there. And so deductive reasoning is very good at eliminating things that are absolutely wrong. Right? Like, if you can't kind of construct this [00:15:00] thing, or if the construction is incorrect, then it means that that thing is probably not true.

Right? Or that you, you have some sort of bias or some sort of flaw. So all of that deductive logic, all that type of stuff tends to lead towards those, you know, traditional structures of logic and rationality, and therefore towards understanding things like biases and argumentative fallacies, right?

So, you know, you get a lot, a lot of kind of people online who are initially interested in philosophy, and they'll, they start, you know, criticizing you by producing tons and tons of, like, fallacies as, as counter arguments. Is that just everybody? Yeah, exactly. And so the, the problem with that is that it's just, it's, it's only, it's, it's treating all arguments as if they're all inherently deductive arguments, and most arguments people are having are not actually [00:16:00] deductive.

I want to, I want to,

John Willis: I want to come back to that because that's something in, in Larson's book about like, what if deductive is wrong, but I want you to keep going with the inductive and what you're doing with this.

Jabe Bloom: So inductive logic, like the classic one that most people know about these days probably comes from Taleb, and Taleb uses the black swan theory to kind of explain it. This is how most people kind of get introduced to induction.

And it's important, for me at least, it's important to point out that inductive basically means some sort of rationalization to the best explanation of something. So all the swans I've ever seen are white. Therefore, all swans are white. That's inductive, right? There is no proof there that you'll never ever see a black swan.

And of course, eventually we go to Australia and we do see black swans. And so the induction proves itself to be false over time, right? But in general, the way that [00:17:00] induction works is that it's a pretty good guess. That's a pretty reasonable thing to state, that all swans are white. Because all swans in England are white.

And so, if you look at something, and it's white and has this shape, it's a swan. And so, you begin to move from kind of a set of absolute rationality, kind of deductive, to a set of things that is based more on probabilities. And the probabilities are in theory based on prior experience, right? So I've experienced swans before and in all my experiences, they're white, therefore all swans are white, right?

But that's a probabilistic statement. It's not an absolute statement, right? With the probabilities, just like we've talked about kind of before, inside of any sort of, like, you know, when I, when I think about Shewhart's, you know, economic control of systems and stuff like that, probabilities are about not, not necessarily caring [00:18:00] that you're absolutely right, just that it's productive to think this way, right?

John Willis: The analytics of it points you in a direction, right?

Jabe Bloom: That's right. They justify your belief is kind of the way I tend to say it. So, you have a justified belief if you have an inductive belief, right? And so you're starting to get close to pragmatism right now when you start saying, I don't really care if it's absolutely true, I just care if it's productively true.

And this is roughly what Peirce starts meaning by pragmatism, which he, he actually calls pragmaticism because he wanted to kind of highlight his differences between, like, James and some of the other pragmatists. Those, those other pragmatists tend to be more concerned with things like community knowledge.

How do we share knowledge and stuff like that? And the way that pragmatism affects those theories of knowledge. And Peirce was not particularly interested in [00:19:00] that. Peirce was interested in this question of, like, how do scientists do the work that they do? And so he comes up with this third form of logic.

He, he, by the way, you know, three is the magic number. He's big into threes and he's got actually a bunch of mathematical proofs about why three is, like, the most important number. So, which we can talk about a little bit. But in abduction, so abduction means bad logic, bad rationale, it literally means not good, right?

And the traditional way of stating what abduction is: it is a good guess, or it is a statement of truth where the truth value of the statement is its explanatory power. In other words, it explains something, doesn't necessarily tell you how it works. It just explains what [00:20:00] happened, right? And that's why we get, like, the murder mystery version of it, right?

So in the murder mystery, we don't need to explain the physics behind someone being killed. We need to explain how that person got killed and who killed them. So the explanatory value of the statement is what, what matters. You don't need the kind of underlying scientific aspects of it. And this is important again, because pragmatism starts to indicate that, like, that's what you should care about.

And when you're rationalizing about something, you don't need to care about the deductive underpinnings. You don't even necessarily need to appeal to the inductive underpinnings. You can just say this explanation seems reasonable. And there's two qualities that I think are important to notice there.

One is we get towards this idea of what we mean by a hypothesis again. So if we only had deductive reasoning, a hypothesis wouldn't necessarily be important at all [00:21:00] because the deduction doesn't add anything by being exercised. Does that make sense? So, like, you can add 1 plus 1 equals 2 over and over and over and over again.

You're not experimenting with anything. Right. Right. Inductive stuff, like, you know, all the swans are white. There is the potential for adding information, right? There's a potential for an experiment to make some positive stuff there, but in that case, you know, you've already started to isolate the, the variables that are important, the whiteness, the color of the bird is an important.

John Willis: Would you say that at that point, the predictability of the probability of it becomes more like a deductive?

Jabe Bloom: That's right. You're getting closer to what we would mean by deductive.

John Willis: And you'd, like, at some point go, hey, wait a minute. All swans are white. Like, like, this, this is almost right.

Jabe Bloom: That's right.

Jabe Bloom: And so, like, one of the qualities of abductive logic [00:22:00] is, or one of the qualities of scientific experimentation in inductive realms, is surprise. So the truth value of something is highlighted by the fact that you would be really surprised to find out it was not true. So in the case of all swans are white, when you, people are surprised to find black swans, right?

And it's the surprise that indicates their, their like belief and their, you know, belief enough to take action or to set, have a set of true justified beliefs. That's what gets violated. That's what causes surprise. So in theory you know, when you're looking at true scientific experimentation, at least if you're going to take like a Popperian view of it, where falsification is the most important thing, the only value of doing experimentation is to prove dogmatic inductive beliefs are not correct.

So, like, the value for Popper in [00:23:00] doing experiments about white swans is to literally invalidate that statement, to say that that's not a true statement. Right? And that's why, for Popper, it's really important the structure of a hypothesis be falsifiable because if you don't structure it in a way that can be falsified, then you end up doing all these experiments that only reinforce the inductive statement.

So that's the one thing, is that this first blush of, of abductive logic ends up being, it is about the way that the scientist thinks before they induce the hypothesis. So it's the guess. It's like, so Charles Peirce has this great kind of example of it, or a great way. Yeah.

Interesting way of stating it. He basically says abduction is like laying down under the field of stars and trying to guess which things you're looking at are stars versus other galaxies. Right? [00:24:00] And he's basically saying, like, with the naked eye, humans can't really tell this, but you could make some guesses about where to look.

And this is what he means by, like, how do you find the first galaxies? How do you find, how do you differentiate these things? Well, someone, at some point, not inductively, because they've never experienced galaxies before, guessed. That thing over there is not the same as these things over here. That thing over there is like a blurry disc, and these things are sharp pointed things.

That must be something else. Where are other things like this? And the guessing about where the other things are, that is abduction, right? And so, eventually, you know, if you do astrophotography long enough, you realize that there's like galaxy season and there's nebula season. It's because galaxies line up.

Against the axis of our galaxy, and it's not really that they line up. It's just that there's so many, if you look through the edge of the [00:25:00] Milky Way, there's too many stars in between you and the rest of the universe. But if you look out the side of the galaxy, so, you know, the galaxy is flat like this, you're looking up.

Or down, there's less stars in front of you so you can see more galaxies, right? So if you're, if you're guessing, you eventually guess, hey, like, we should look over here, or we should look at this time of year. And so it's the guessing that makes it Interesting. Yeah. And then the thing that and most people I think get there, right?

So they get this thing that there's some people like there's other pragmatic philosophers that call it like the mirror of nature. And what they mean by that is that there's some way in which the human mind is evolved to be in relationship to the things around it. And therefore, that evolution kind of hints you towards certain guesses.

John Willis: To the instinct or common sense, right?

Jabe Bloom: That's right, yeah. Some, some sort of way in [00:26:00] which you're a, you're, you know, in, in the language that I would tend to use, you're, you're attuned or capable of tuning in with your mind. So it's like saying, like, You can tune a radio, but the reason you can tune a radio is because it's designed in a way.

It's a circuit that is made to, like, interpret certain wavelengths, right? That's basically what Peirce is arguing about the human mind. The human mind can be tuned, just like you could tune a radio, to, like, notice certain things about the universe, because it is all in relationship with the universe. So that's one of the things.

And I think most people get there with abduction, but the thing that most people miss, that helps you understand, or helps me, hopefully it helps other people understand, the difference between hypothesis in complex and complicated and this abductive reasoning in complexity, is that induction and deduction use the [00:27:00] fact that there's one answer as a, a way of justifying the truthiness of that answer, right?

So in induction, you can't have multiple guesses because the whole point is it's supposed to be reasonable, and in reason, you only have the one right answer, right? In abduction, the answer is you can have multiple concurrent, equally valid answers that you have to accept as equally true. Yeah, so you have to have these, like, other versions of truth, which is why pragmatic truth becomes important in relationship to abduction, because pragmatic truth basically says it doesn't really matter if it's absolutely true or not, it just matters if it's productive or not.

So you can have multiple guesses about what's happening, all equally productively true. They help you move forward, and you don't have to eliminate, you don't have to eliminate them in order to make progress, right? And so often in complex domains, what we get is [00:28:00] multiple hypotheses, multiple guesses about what, what the hell's happening.

And the, the trick with using abductive logic is to not try to eliminate those things before doing any further work. You're not trying to reduce the set of guesses. You're actually trying to say, These guesses, this is a set of guesses that are all reasonably accurate. We should experiment on all of them, not just one of them.

Because we're not trying to get to the controlled statement that we talked about before where we know which variables, etc. What we're trying to do is just provoke the system and see what's happening. And our guesses are more like, this is a good place to poke the system, than an explanation of what will happen when we poke the system.

John Willis: And so, so let me, if I just take a try at this, right, like, so the abductive approach could be like, this team is highly productive, but this team doesn't bring in a whole lot of revenue, right? Those two, in maybe a sort [00:29:00] of a, maybe, and I'm stretching here, maybe in a sort of a deductive, or I'm sorry, inductive fashion, the, the one would be a truism.

They're productive, my goodness, right? Which we know is sort of like what we think about in DevOps. We should always be questioning. So, so what I'm trying to summarize here is that in the complex domain, like you said, you need to poke at things. And not as sort of truism or like, this will answer all my questions.

We need to be able to poke on this to tease that out, where these two, three, five things can all be true. And they may seem contradictory.

Jabe Bloom: That's right. So you're, it's more like consistently exercising the system to see what the system will do, than trying to do something specific to the system, knowing what the explicit outcome will be.

The hypothesis, the theory is you can close the loop. You can explain the whole loop. In a complex domain, you're only saying, I think if we do this, that thing over there will wiggle. I don't

Right. But it, you know, what are the examples Using abductive logic or an example of how to think about how you might use abductive logic is you have a problem and you have a group of people that are involved in the problem in an inductive or deductive rational system, you put them all together and you'd say, what I want you to produce is is 1 experiment that we should run next.

And you guys need to talk about it and figure out what the right answer is. Right? And don't come to me until you have that one thing, because that's inductively the best explanation of what's happening, right? In abductive or complex domains, with abductive logic, you might actually take those ten people and divide them into groups of three, and say you all can't talk to each other at all.

Because what I want is each group of you to make guesses about what's happening, and I [00:31:00] don't want you to cross-contaminate each other with the guesses, because we don't know what the good guesses are. And what we're worried about is that you will rely on your instinct to do deductive or inductive logic to try to get to the best explanation.

So you'll collapse into what sounds like the best answer. And we won't actually effectively search the entire phase space, the set of possibilities. So instead, since we're not really sure what's happening, we don't even know if we know what the right variables are, I actually just want to have a group of people, three sets of people, all of whom are simply trying to come up with a good explanation of what's happening by experimenting on the system, but they're not trying to say, I know exactly how the loop works.

They're just trying to say, we think if you poke here, that will happen. And that's justified enough reason for us to poke there a couple of times. Right? [00:32:00] And so one of the things you have to kind of imagine is that that means, it's like, so in deductive reasoning, you shouldn't do stuff until someone can show you the math, basically. If you can't show it to me mathematically, then we're not going to do it.

Yep. In inductive, you're probably asking people a lot about their experience. Because inductive logic is about prior experience, you're basically querying people about their expertise. I have this problem, you've dealt with problems like this before, how do we solve this problem? You're using inductive logic, you're using expertise.

That person is going to come back to you and say, this is the thing that we should do. They might give you a set of options, but they'll prioritize that set of options, and they'll tell you how expensive each option is, right? In that case, as someone who's managing that problem, you only have to select, you know, how much money do I want to spend for how much confidence level does the expert have, right?

That's kind of the way it [00:33:00] works.

But let's, let's skip over to a completely chaotic system.

John Willis: I thought that was gonna be a rabbit hole. But yeah, so yeah, I was going to ask you about chaotic systems.

Jabe Bloom: In a chaotic system, the answer would be that people just bring you random ideas, just bring you tons and tons of random ideas.

Right? And then you would say, like, why should we do that? And they'd be like, I don't know, because that's the best I can come up with. That's, that's, I'm just guessing, right? In complexity with abduction, abduction still has a filtering value to it. It still isn't just accepting anything. It's accepting theories that have explanatory power beyond the other explanations that are happening.

So you can still get some, like, those don't explain what's happening as well as this one does. But you don't necessarily have to reduce it to a single one. You could have multiple ones that all have the same explanatory power. You know, there's a weird effect of this in organizations, which is, as an [00:34:00] executive or as a manager or a product owner, things like that.

One of the questions you should be constantly kind of asking yourself is, if I treat this like it's a complex system, then I should expect that the team has multiple equally valid, equally coherent explanations of what is happening, and what I'm experimenting on is trying to establish a better understanding of what's happening.

If I am treating it like it's a complicated thing, then I should expect that the team can answer: this is the best answer, this is the next thing we should do. Right? And the result of that, from, like, a design theory perspective and from, like, thinking about process, is that in a complicated domain, what you would expect to see is a sequence of experiments. So, one experiment at a time, you do the best experiment you can define at that time, you control all the variables, and you try to [00:35:00] isolate the system.

Of experiments, so 1 experiment at a time you do the best experiment you can define at that time you control all the variables and you try to [00:35:00] isolate the system. So, you know, the answer, right? And then whatever that experiment produces. In theory produces more information that means that there will be another experiment, a sequence of experiments that move forward in a linear pattern, right?

One experiment at a time. In the complex domain, what we'd see is multiple concurrent experiments at the same time, because there's no need or attempt to try to control the variables, because you literally don't know what variables are even at play. Like, you just can't control the variables because you don't know what it is to control for.

Right? And you can see this, one of my favorite examples of this is, you've got, like, any, any designer who's designing, like, let's say a chair or products or anything. Ask them to see their sketchbook. What you'll see is they don't do chairs linearly. They sketch a bunch of chairs. Obviously, they have to sketch one first, but that's just physics, but they do it to compare them as equally [00:36:00] valid hypotheses.

Right. Oh, this could be a way to do a chair. This could be. And so what they're trying to do is imagine something that doesn't exist. But how do you improve on something that doesn't exist? Well, you create another thing that doesn't exist and you compare the two non-existent things together, two different abductive hypotheses about what's happening. And that's how you make progress, by comparing the guesses together as opposed to the actualities together.

And that is a way in which you can see progress happening in the complex domain, is that you get these multiple things. And to the extent that your teams are producing single answers, then they are by themselves producing a complicated explanation as opposed to a complex explanation of the system.

John Willis: Well, I think of Columbo, right? Like, you know, I was thinking to myself, doing the, you know, the qualitative analysis stuff that I've done, like, I'm, I'm the classic Columbo, like, yeah, you know, I don't know, I'm kind of dumb. I mean, you [00:37:00] know, but I mean, it is that he walks in and there's, like, all this instinct and it's like, he's tuned, right?

When he walks in that room and there are eight people and the opening scene and whatever it shows, right? Yeah. There's eight people in the room. He's, he's in tune with everything there. And he, he has this instinct that it's probably this person, and it's pure guess at this point, because he doesn't have any, there's no sort of inductive parallels, right?

That's it. But then, like, he, so then he, again, his instinct kicks in, and then he starts driving. It sounds like, too, like, you know, I don't wanna jump into, I still wanna cover the Myth of AI, but there's this sort of bouncing between abductive, inductive, deductive. Like, there's mud on the guy's shoes, right?

And now that gives him a little bit more inductive capabilities, understanding that the person who murdered a person had to cross the field and it was muddy, right? That, that kind of stuff.

Jabe Bloom: But you also, so you also [00:38:00] get this weird thing in abduction that's not as clear in induction and deduction.

Abduction values or requires the falsification of other things. So you can imagine, it's like, so we've got, like, four competing theories about who killed the person. One of the ways to do it is to say, this is absolutely the person who did it. But we, we know that that's not even something that's expected in court.

It's not like we use deduction in court, right? We use induction mostly. How do you get to someone in court is you use abduction. You guess. Yeah, that's awesome. Yeah. And so you can imagine these four different suspects are kind of like in a race. Well, one of the ways you can do it is by saying, okay, so there's other people that I have, I have to explain that Bob is the killer to all these other people.

And those other people have guessed that Sally and Bill and Joe are the killer. Also, part of the [00:39:00] investigation has to be to prove that they're not good suspects, right? So I still have to understand how, why are those people, what's the explanatory power of saying Sally killed him? Okay, well, how do I take the explanatory power away from that statement?

How do I prove that it's not true? Right? And so I could use deduction. Sally was physically in a different place and I can prove that. I could use induction. Sally, you know, gets queasy about blood and whatever, you know. Like, but I'm trying to, like, basically take the four competing abductive guesses and produce a best guess out of that.

And once we produce the best guess, now we're doing induction because we've gone to the best explanation.

John Willis: So, so this is like one of those sort of interesting, like, two things can be true, multiple things can be true. Peirce, by all accounts, was a terrible human. Right. I mean, we've, you know, I think you read the Myth of [00:40:00] AI, like, Harvard locked his papers up for 50 years.

I mean, how terrible of a human do you have to be for that. But by all accounts, the more I read about him, he was incredibly brilliant. Right. And so one of the things that, you know, I think, I think a lot about, you know, I'm working a lot, doing a lot of research on the history of AI and, you know, and, and actually it was Dr.

Woods who sort of pushed me into the Peirce abductive stuff, and we can talk about AI, but, but as I got more into his background, his contributions to so many things, but certainly AI, right? Some of the things, when I went through sort of the list of some of the things he created, right? Like, you know, the, I mean, I guess, I guess this is the core of my question. I, I know I ramble, but hopefully I don't annoy people.

Abductive reasoning has been around since Aristotle, right? Like, he didn't invent it, but he is considered the father of it. [00:41:00] Is it because of the math and the way he sort of, like, took, you know, took logic to, to math and sort of melded those two in such a way that, that was so powerful, or?

Jabe Bloom: So, I mean, yes, so there's, we've got, like, you know, there's a couple things. Like I said, the playing field: Peirce as a person is, is working in a time where the other dominant philosophy is positivism. So, the positivists believe that there's an absolute deductive explanation for everything.

John Willis: Yeah,

Jabe Bloom: that's, that's what they believe.

There's a unifying theory that everything could be deductively true or false, right? Yeah. And Peirce is basically not only in reaction to positivism, but also in, in reaction to a bunch of kind of dogmatism, including kind of, like, you know, religious dogmatism. So, you know, the, the negative view of Peirce is that he was pretty unconventional at the [00:42:00] time.

In other words, like, he had affairs with women that weren't his wife and blah, blah, blah. So, you know, he, he, he's not big on the religious aspects of things, I think is what I would say, right? So he's not interested in the dogmatic aspects of things, things like that. So he's interested in trying to, like, figure out, why would it be important to be able to prove whether a god exists or not?

Why would it be important to prove these dogmatic aspects? Why can't we just be like, it doesn't really matter whether, you know, the underlying thing is true or not, the question is whether or not it does something productive, right? And so this kind of gets to this idea of kind of pragmatism from there.

And it's in relationship to kind of this dogmatism and this positivism where he's like, basically, he's like, y'all are just wasting a lot of time. Like, you're just wasting a lot of effort trying to prove things that don't really actually matter, because if lighting the candle makes the room bright, like, what more [00:43:00] theory do I need if what my goal is, is to be able to read at night?

I just don't need the rest of it, basically. And, you know, he's doing that within a scientific frame, so he's not just saying, like, anything goes or anything like that. But the result of it, kind of the result of this pragmatic view of the world, is, like, it has massive effects on the culture of the United States, to the extent that most people don't understand what's happening, right?

Or its impact. But the entire kind of idea of American entrepreneurism and can-do-ism and all these things, it's all based on, you know, the way that Peirce's ideas go through, like, Wendell Holmes on the Supreme Court and infiltrate the U.S. culture as, no, no, you should, you should, we can figure it out.

Where figure it out doesn't mean we have to have an absolute scientific explanation of it. Figure it out means we can get the truck [00:44:00] to run again when it breaks down, right? Like, that's the pragmatic underpinning of kind of American business culture.

John Willis: It's so, like, I don't know if ironic is the right word, but it's like, most people have never even heard of pragmatism, and when I was doing research for the book, and how you walked me through some of this, and even what you're talking about right now, it really was the sort of lifeblood of the Americana, right? Like, we were this new nation, we were looking for a way to think differently, and here's this guy and this group of people that are creating something that nobody's ever heard of, that, like, in your words, I think, and I agree, became a big part of what became Americana entrepreneurship.

Absolutely.

Jabe Bloom: So, like, James goes on to establish the idea, concepts, and life force that becomes progressive education based on pragmatism.

John Willis: Wow. Yeah. That's crazy.

Jabe Bloom: Yep. So like most kids who've gone to any [00:45:00] sort of progressive school at all, a school that thinks that, like the importance is that the child learns something valuable about how to be a good person in the world, right?

If that's part of their education, or they learn how to be an auto manufacturer, any of those things, like, all those kinds of, like, you know. You don't have to end up in an ivory tower to have had a successful education. That's kind of progressive education. You don't have to end up with a PhD to be successfully educated.

Right, right. All of that's based on pragmatism. And James and others have a different idea of what that means, because part of what James and Holmes and these other people are trying to do with pragmatism is say, what's important is just that we share some common beliefs, and pragmatism allows the common belief to be shared without being reduced to absolute statements.

So in other words, I, I can pragmatically share belief in God with you without having to say, like, oh, my God is, you know, [00:46:00] whatever, you know, the Christian God, and your God is some sort of Buddhist God, right? Like, I don't have to have that conversation. We can pragmatically just say, oh, we believe in a higher power, that type of stuff, right?

That, that ends up being kind of James and those other guys. They're like, when you try to have an argument down to the details, you lose a certain value of community that can only be had by taking a pragmatic approach to it. Anyway, which I think is super interesting to talk about when you think about things like common ground and other things we've talked about in organizational transformation, right?

Like, the pragmatic transformation would be like, you don't actually have to have everybody on the same page. You have to have everyone productively moving in the same direction, and those are not the same things, right?

John Willis: It's funny how everybody uses and knows the term pragmatic. He's pragmatic, she's pragmatic, but they probably don't know pragmatism.

So this is going to be a, a sort of [00:47:00] a rabbit hole, but I have to ask it now because it's burning in me. So I, you know, prior to me saying, oh, you know, hey Jabe, can you read this book and let me know what it is? I had this sort of theory of Deming being, you know, being different. Goldratt. I mean, because they were physicists, they, they were sort of non-deterministic thinkers, right?

And, and there was something beautiful about sort of the Deming and Goldratt thread versus Taylorism and all that, right? And so one of my earliest presentations is, and I'm actually giving it again next week, a revised version, is Deming to DevOps, right? And, and, and basically my thread was, sort of, it all started with Darwin.

Boltzmann basically was intrigued by what Darwin had done with biology. He tried to do it with gases and, and, and statistical mechanics. You had Planck, who sort of saw this, figured out, like, you know, this is my non-scientific take, but Mark [00:48:00] Burgess said this to me, he figured out the, how to measure the unmeasurable.

Einstein uses Planck's line of theory to head to the photoelectric, and, and then Deming is, is getting a mathematical physics degree in 1925. All this happens to spread around him, but I guess James and Peirce, all those guys are sitting in his Metaphysical Club right up there. What is the sort of tie-in between that?

Because. I've always thought like linear about like this non determinist thread and then I find out he read this book I gotta figure out what this book is Allah, you come into this and help me understand that there's a whole nother part Equally important as his physics background is this philosophy thread.

So how did the metaphysical club and that they get is there sort of like they they're following this thread to the influence by it or question.

Jabe Bloom: Yeah, so there's. In [00:49:00] theory you know, or not in theory, but in the literature, there's a couple crossovers between continental and, and pragmatism, continental philosophy and pragmatism.

So, like, these kinds of sets of physics that are based on uncertainty and/or incompleteness and/or probability, right? So in other words, like, we can't know certain things, right? So we have incompleteness theorems. We have uncertainty about the nature of reality that is irresolvable. We can't actually get to it.

Or or we also have the observer effects, right? Where things appear the way that the observer intends them to be. So they appear as a wave or they appear as a particle. All those things are things that are taken up within pragmatism because that's what pragmatism is basically doing, but it's doing it at a grand, like a grander non physics scale.

So basically, like, Bohr's saying, like, you just can't know certain things about the position of a particle. You just can't know those things. And Peirce is [00:50:00] basically saying the same thing. It's not valuable to try to know certain things. This is like, it doesn't have anything to do with what you're experimenting with.

You're, you know, you can kind of think about it as, like, the deductivists or the positivists want to be, like, full stack physicists. They want to explain everything all the way as low as possible and as high as possible. They want to explain the full stack. And Peirce is like, isn't it just good enough to kind of, like, know Node.js

and HTML? Like, why do I have to know?

John Willis: Well, that goes back to the pendulum, right? Like the whole pendulum. Like there was a point of like, why do I need to know any further? That's what the perfect measurement of a pendulum does.

Jabe Bloom: That's right. And it's an acceptance of reality, too, right? There is no such thing as a full stack engineer.

Right? There is no such, like, there's no one out there in the world who knows everything about how the computer they're sitting in front of works. There isn't. Nobody knows that. No one knows how the CERN supercollider works. No one person can explain everything about the collider. [00:51:00] There's a group of people working together.

who cannot completely share their complete sets of knowledge. But by having those people in the same place, pragmatically working together, they produce the supercollider that produces information about the way that physics works, right? But nobody knows the whole thing. And so there's this aspect of boundedness, is usually the way I describe it, that, that human cognition is bounded within a certain realm of things. You can, you know, think of it as we can only see certain wavelengths, can only hear certain things. You can also think of it as, like, you only have so much time, you only have so much computational power in your brain, if you want to think of your brain as a computer, things like that, right?

And that means, pragmatically, you cannot know reality directly. You just can't do it. And so, you know, Peirce is basically like, why are you torturing yourself? It's not worth it. So, if we move kind of from there towards Deming, [00:52:00] yeah.

One of the important things, I think, that would relate both to Deming and to AI, in different ways but on the same topic, is that Peirce was famous for what is called his semiotics, right? So semiotics is a, is a theory of symbols, right? So traditional semiotics basically says, like, you've seen, you've seen the picture. This is not a pipe, right?

This is not a pipe, right? So, it's a painting of a pipe and the on it. It says this is not a pipe. So, what they're basically pointing out is the symbol is not the thing. There's a, there's a There's a separation here. There's actual pipe and the picture of a pipe. And so traditional symbolism or semiotics, it tends to be these dualisms, right?

There's the thing and then there's the reference. So there's the reference and the object, like this. And so there's interesting things to think about. That is the first step of thinking through what ends up being this [00:53:00] idea that AI is a symbol system. Right. All, all AI is doing is manipulating relationships between symbols.

It doesn't know, it doesn't know anything about the apple. It knows the symbol of the apple, because that's what's in the computer. Right. So that's traditional versions of semiotics. Peirce's semiotics, though, is subtly different, because he, he, he adds a third thing in there. So for him, there's the thing, the object, the symbol, and the interpreter.

And so often, he actually, like, it might be better to think of the way he thinks of an interpreter as, like, a translator, right? So, in his system, there's some active agent, usually a human, at least for him, that's literally interpreting the relationship between the two things. The symbol set does not [00:54:00] exist in abstraction, it exists in interpretation.

So it's an active cognition that's doing this interpretation, that's relating the two things. And of course, that thing, if it's human, it's fallible. And so these relationships can get screwy, right? And so, like, a direct link from there to Deming would be things like operational definitions, right? So an operational definition is just an attempt to say the symbols and the objects are related in a specific way that multiple interpreters can agree on, so that the relationship isn't unstable when the interpreter changes.

That makes sense. So he does this other really weird thing, which is that he, he breaks symbols into, into a set of three things: an icon, an index, and a symbol. So an icon is something that looks like the thing that it is pointing at. So an icon would be an apple, [00:55:00] and the, and the icon would look like an apple, right?

A symbol doesn't necessarily have to visually represent the thing. So, like, the word apple is a symbol, right? It does not look like the apple, right? So that's the second set. And then the last one is the index. So an index could be, like, temperature or weight, or, you know, the color has a number to it, right?

So it's, it's a numeric symbol, as you can see right there. That third one there is exactly the type of thing that we end up seeing imported into Shewhart and Deming's work, where what they're concerned about is the fact that this number is a symbol related to something that actually exists, and that, that relationship, the symbolic measurement

of an actual thing, needs to be controlled by helping the interpreter explain how they interpret the [00:56:00] relationship through something like an operational definition. Right? And again, the advantage of that is, is, is you don't have to say that the number means something absolutely true. In fact, you recognize that it is the human that is interpreting what that value means in relationship to the thing.

Yeah. And so that means that you do end up having to develop things like we talked about in the beginning, where it becomes important to train the interpreter to not just read the number as a deductive answer, but read the number as a proposition about a relationship, so that they can actually interpret the relationship, not just mechanically receive it.

And so if you think about that, you get this kind of, this way in which probability, statistics, incompleteness, and all those things start emerging as a new way of understanding the world in general, [00:57:00] right? So we're gonna move from an absolutist understanding of the world, God did it, or, you know, there's some sort of positivist explanation for how it happened, towards a probabilistic explanation of how things happen, which means that things don't always repeat themselves, blah, blah, blah.

In this realm, we're going to start understanding how the world works based on, in essence, complexity theory. Not all of them would have said that back then, but nowadays I think that's roughly what we're talking about. Context matters, interpretation of context matters, context influences what the numbers mean, all that kind of stuff.

And you end up needing to train frontline workers, people who are working directly on the machines, in interpretation, in translation: what are you seeing, and what does it mean? As opposed to simple, rote repetition, like, the way I'm going to measure your success is whether you can do the same things over and over again in the exact [00:58:00] same way, right?

Because what we're trying to do is search the space of, in the case of Toyota, the complete production line, find every opportunity, find the variables that probably matter, and reduce their variability in order to get more control of the system. But who are the people who are going to see that?

Well, if we want to increase the search space, we need to increase the number of searchers, and we want those searchers to be closer to what's being searched. Yeah, and so then we need to train people how to be those interpreters. And this ends up being close to the way that Peirce would tend to think about science in general, which is he wouldn't care if those people,

I mean, he was probably a competitive asshole like other people can be, but he wouldn't [00:59:00] care whether or not those people could explain themselves with physics. He'd only care whether or not their explanations produce a positive outcome within the system they're working in, and that's what Lean is trying to do.

It's not saying, hey, man, I need you to understand the physics of the bolt machine you're working on. I just need you to understand the relationships between the certain material interactions you work on.

John Willis: Yeah. I mean, so Deming would use, you know, the job description of cleaning a table, or, you need to clean a table, right?

And without the operational definition, it's like, what's the table going to be used for, right? Is it a picnic table or is it an operating room table? So are you saying that's sort of the operational definition, or is it the third leg of Peirce's stool of symbols, and, you know,

Jabe Bloom: So pragmatism, you know, again, we've talked about it in terms of economics and other things, but pragmatism has a stopping function that's contextual, [01:00:00] right? And the stopping function in the case of, you know, this is your dinner table versus this is your operating table, the stopping function is different for how clean the table needs to be.

For an operating table, the stopping function goes much further towards very clean than it does for your home table, right? Right. And so it's contextually important to understand what the economically viable answer to this is. Another way of saying that is, what's the stopping function for it? And decontextualized stopping functions tend to maximize, right? So if you didn't know what type of table it was, but it could possibly be an operating table, you'd say all tables should be as clean as an operating table, because you can't differentiate the difference. Yeah, right.
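
A small sketch of that contrast in Python, with the contexts and cleanliness thresholds invented purely for illustration: a contextual stopping function stops at "clean enough for this table," while the decontextualized one has to fall back to the most demanding threshold and maximize.

```python
# Hypothetical cleanliness thresholds (0-100) per context; the numbers are
# illustrative, not real standards.
STOPPING_THRESHOLDS = {
    "picnic_table": 60,
    "dinner_table": 80,
    "operating_table": 99,
}

def keep_cleaning(current_cleanliness: float, context: str | None) -> bool:
    """Contextual stopping function: stop when clean enough for this use.

    If the context is unknown (None), we can't differentiate, so we fall back
    to the most demanding threshold -- the decontextualized stopping function
    that tends to maximize.
    """
    if context is None:
        threshold = max(STOPPING_THRESHOLDS.values())
    else:
        threshold = STOPPING_THRESHOLDS[context]
    return current_cleanliness < threshold

if __name__ == "__main__":
    print(keep_cleaning(85, "dinner_table"))     # False: good enough, stop
    print(keep_cleaning(85, "operating_table"))  # True: keep cleaning
    print(keep_cleaning(85, None))               # True: unknown context, maximize
```

The fallback branch is the point John and Jabe land on: strip out the context, and every table has to be cleaned like an operating table.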

John Willis: Or you could say, like, you know, what is the sort of perfect table?

Is that sort of anti-Peirce, right? Like, let's sit and study the perfect table. Well, wait a minute. Like, [01:01:00] why? What is the perfect table for a dining room versus a

Jabe Bloom: A

John Willis: room?

Jabe Bloom: And so you decontextualize it either way. You either decontextualize the stopping function or you decontextualize the object itself, right?

So you take it out of its context. Whereas pragmatism says, no, no, no, it's in context that the information becomes valuable. Yeah.

John Willis: You know, it appears that Shewhart was the nexus for a lot of Deming's knowledge. I mean, Shewhart is the one who told him to read Mind and the World Order. You know, Shewhart obviously had statistical process control.

Shewhart was a physicist himself. You know, I'm wondering, was Shewhart possibly reading Bridgman and Peirce, or was Peirce reading Bridgman? I mean, they were all around, like, it seems to me they were all incredible researchers, so they were probably all reading each other's stuff, right, and it was all sort of,

Jabe Bloom: I mean, you can't get to Mind and the World Order philosophically without having gone [01:02:00] through something like Peirce, or having some general understanding of what Peirce is trying to achieve, and there might be disagreements. Again, in my general kind of opinion, I think that Lean

straddles Peircean pragmatics and Jamesian pragmatics. And what I mean by that is, I think the scientific-method things that you see represented by things like the Toyota Kata, that's Peircean. And the social aspects of shared information, information radiators, all these things are Jamesian, right?

John Willis: That's your book, buddy. That's your book. That'd be a killer book. Yeah, I'd read that. Even James didn't want anything to do with Peirce near the end of his life, you know, like, just, you know, the people who were sort of so involved. Well, we could obviously go on forever, but I think I want to [01:03:00] sort of table it and maybe do the review of Larson's book.

Jabe Bloom: Yeah, I think this is a good setup for the Larson thing, because it's really fun.

John Willis: Yeah, it was good. Yeah. And, you know, there's just a ton of stuff there, it'll be a great part two to this, because, you know, Peirce is all over that book, right, in its argument for the myth of AI.

Jabe Bloom: Absolutely.

John Willis: Well, this was amazing. I had a blast. It was good stuff.

Jabe Bloom: Thanks for having me, John.

John Willis: It was great. Yeah, we need to queue another one up, get a part two queued up. So what are you up to? Where can people find you? Where do you want them to find you?

Jabe Bloom: You can find me at Ergonautic, where I'm hanging out with Andrew Clay Shafer.

Andrew's a character, man. Oh, man, he's a character. And we're working a lot on helping people understand how work systems, flow systems, et cetera, [01:04:00] work within some of the paradigms that we've been discussing. How do you measure them? How do you understand them? How do you create knowledge about them?

And we're also trying to play with ideas about what that looks like when you start having artificial agents in the mix as well. So what does it mean to have a socio-technical system where parts of the system that used to be purely technical are starting to appear to be somewhat social?

Yeah. So I think there are some really interesting explanations there about how to think about gen AI, how to consume gen AI, things like that. I will be with you in New York in October, at DevOpsCon. We both have workshops, so people should go to John's workshop.

John Willis: Well, it depends on what you want, right?

Jabe Bloom: So we'll be in Brooklyn, I think it's the 7th through the 10th, and people should come hang out with us there. That'd be fun.

John Willis: Yeah. And, you know, again, most people who, A, have gotten this far in listening and do listen to this show [01:05:00] would already know that Sasha, Jabe, and Andrew are wicked, wicked smart.

I couldn't recommend anybody better than them to help you try to understand the big picture and, just in general, how to get improvement out of your organization. So

Jabe Bloom: Thank you, John.

John Willis: Rock and roll, my friend.

Jabe Bloom: Thank you so much for having me. I look forward to talking again.

John Willis: Sounds good.
