S4 E12 - Dr. Jabe Bloom - Temporal Design and Digital Transformation
In this episode, I sit down with Dr. Jabe Bloom, an expert in design studies and organizational theory. Having recently completed his PhD at Carnegie Mellon, Dr. Bloom brings a fresh perspective on the intersection of temporality, complexity, and design, particularly in the context of digital transformation in IT and other industries.
We dive into the nuances of temporality versus time, exploring how these concepts influence design decisions. Dr. Bloom elaborates on how temporality is a qualitative measure of change and its impact on human experience and project planning. This foundational concept sets the stage for understanding "timeful" design, which embraces the dynamic nature of contexts and proposes continual re-evaluation and adaptation.
Dr. Bloom challenges the traditional notion of design as a finite process, suggesting instead that it is an ongoing interaction with the environment. This perspective aligns with DevOps principles, where software development and operations are seen as continuous and evolving processes. He highlights the importance of context and proposition in design, where solutions must be constantly re-assessed to remain relevant as contexts change.
One of the key takeaways from this episode is the idea of "bounded rationality". Dr. Bloom explains how this concept, which acknowledges the limitations of human decision-making capabilities, applies to modern digital systems. He also delves into the concept of "recombining," which involves ongoing negotiation and collaboration across organizational boundaries to address complex problems that cannot be solved by isolated teams.
Dr. Bloom's insights into the temporal nature of objects and systems offer a profound shift in how we perceive design and operational challenges. By viewing software and other technological artifacts as temporal objects, he advocates for a more fluid and adaptive approach to design and implementation, one that continually responds to changing contexts and user needs.
You can find Dr. Jabe Bloom on LinkedIn below:
https://www.linkedin.com/in/jabebloom/
Resources and Keywords:
Jabe Bloom's dissertation on "temporal informed design" from Carnegie Mellon University.
The concept of "three economies" developed by Jabe Bloom.
Herbert Simon's work on bounded rationality.
Norbert Wiener's work (mentioned in relation to AI and cybernetics).
John Allspaw's master's thesis (mentioned in relation to incident response and dynamic systems).
Mark Burgess' work on systems theory.
Eliyahu Goldratt's work, particularly "Beyond the Goal" audio program.
Deming's work on quality control and systems thinking.
The concept of "erotetics" in relation to design and questioning.
Phenomenology as a philosophical approach to understanding perception and experience.
Google's Site Reliability Engineering (SRE) practices.
Jabe Bloom's website: https://blog.jabebloom.com/about/
Ergonautic, a company Jabe Bloom is working with: ergonau.ly
Transcript:
John Willis: [00:00:00] This is John Willis, another Profound Podcast. And I know I say this all the time, like, this is one of my favorite guests. But like, I love all my guests, but, I mean, you know, people know how much I've learned from Jabe. And Jabe, just Jabe Bloom, by the way, I'm going to let him introduce himself.
But like, you just get smarter being in his universe. I mean, you know, I can't really describe it any other way other than just, he makes you smarter, and he does it in a way, I'll shut up here in a minute, but he does it in a way that is just so easy to work with, the way he'll ask you questions and the way he'll help you learn. He's an incredibly unique individual.
So, Dr. Jabe Bloom, would you like to introduce yourself? Sure.
Jabe Bloom: Well, I just added a new prefix to my name: doctor. I just finished my dissertation, defended my dissertation, and got my [00:01:00] degree from Carnegie Mellon. And my degree is in design studies. So I write about and think a lot about time, temporality, and complexity, and the relationship between those things.
How time and complexity shape design decisions, and design decision making. And prior to that work, I was chief architect and CTO and a CEO of a bunch of muckety muck little things all over the place. I started a consultancy with Kevin Behr for a while, and I went with John and Kevin and Andrew to work with Red Hat.
And that was interesting. And now I'm working at Ergonautic with Andrew and Sasha. And we're interested in talking to people about similar ideas, about how organizational design, technology structures, temporality and, you know, measuring work systems can help them create [00:02:00] better outcomes.
John Willis: So, I'm going to have to be the sherpa for this, because there's two ways this could go.
So, you know, I've had the fortunate opportunity to work with Jabe, especially at Red Hat, and, you know, sort of getting bits and pieces of what he was working on along the way. And, you know, I doubt there are very many people listening to this podcast who don't know who Jabe is. You've seen his work, you can see a lot of what he's produced ever since I've known him, but certainly in the years at Red Hat, with platform engineering and recombining and all.
But I think where I'd like to start is, I think, as probably for most of us who aren't in academia, you know, we hear words like complexity, but we're pretty good with those things. Like, we get complexity, [00:03:00] like, we dance around that, you know, in many ways in the work we do. But temporality, right?
That's an interesting one. I think the first time I heard it, it was like, what are these people talking about? And your title is, what, temporal informed design, right? And so, like, why temporality? I want to go deep into all this stuff, but why that word in the title?
Jabe Bloom: So, the first thing we could do is kind of differentiate temporality from time, right? So, time is kind of like a quantitative measurement of change. Usually that's the way it's thought of, and one of the weird things to think about is that in a system that doesn't change, if there's no change,
in that definition, there is no time. Right? So there's this weird idea, and this goes all the way back to Aristotle, that time is a measure of change, yeah? And you get weird artifacts from that if you [00:04:00] go forward into physics and stuff like that. So for instance, Einstein would argue that time does not exist in the sense that any philosopher thinks of it existing, simply because he believes in what's called the block universe, where nothing changes.
If nothing changes, there can be no time. There's reasons why he's committed to a block universe having to do with special relativity and general relativity. I kind of put that aside a little bit. The second term, temporality, is more like a qualitative version of time, and what it means is, how do humans experience change, or moving through time?
And in particular, like in physics, there's not necessarily really an idea of the future and the past. Most equations are reversible. They can go both directions in time. So like, there's not really this idea of the future and the past. There's just kind of like the calculation. Not, not always true.
Prigogine has examples of [00:05:00] systems that produce different results and therefore have time or temporality built into them. They change irreversibly. But that's a relatively new thing in physics. Those are called dynamical systems. But temporality for humans involves the idea that I, as a human, am involved in projects. I'm doing something all the time.
I'm either trying to get something to eat, or writing a book, or whatever. I'm involved in projects, and therefore what I'm thinking about in the present inherently involves the future. I can't think without some sort of idea of what the future is, or what it could be, or what I'm trying to achieve, or things like this.
And in the same way, I can't think at all without having this set of things that come from the past. So, and in phenomenology, it's called [00:06:00] always already. And it's like language, right? Like, you and I didn't invent English. We have English. But without English, we wouldn't be able to have the types of projects that we want to pursue in our lives or any
John Willis: communication.
John Willis: Right? So it's a consensus, in a sense. I mean, we'll try to stay away from AI on this one, but maybe we'll do a future one. So I'm going to take an interesting swag; I hadn't really planned on thinking about this. But in my book, I write about Sully, you know, Miracle on the Hudson, right?
And I think the way that temporality came up for me there is, between the movie version and the real-life version, the movie version is actually the more interesting for temporality, because in the movie version, they look at the transcripts, and, like you said, that's a quantitative approach to saying, well, why didn't you go to Teterboro?
Like, why didn't you land where you did? And then in the movie version, he does this thing, and what I found out from the real story is none of this [00:07:00] is true. The NTSB really didn't try to grill him, you know. But the movie version is more fascinating for what it shows, which, again, stretches into Simon, you know, Herbert Simon, bounded rationality.
In other words, is that a good example? Like, one version is almost linear: here's what we said, you had this much time, why didn't you go to Teterboro? You could have saved money. The other version is where he says, this is happening, I've got the guy on the right-hand side doing this, and the bird hit the window.
And, you know, is that kind of...
Jabe Bloom: So, for Simon in particular, with what's going on there, there's an idea of a traditional rational actor, like the one that, in the movie, the people wanted the pilot to be: a completely rational person. And that involves what's called global rationality.
And it's the idea that you have perfect access to information and that you can calculate without any [00:08:00] time, like you can just take the information and turn it into a decision without any time to calculate. Right? So you have perfect calculation, perfect information. And everything's transparent to you.
And in any real human experience, time is part of decision making in the sense that you don't have access to all the information. If only because you literally can't load it into your head instantly. It takes time to put the information into your brain to calculate something. And secondly, you can't calculate perfectly.
And you don't have an infinite amount of time. So in a lot of decision-making popular science, there's the idea of biasing. The idea that you literally just can't make rational calculations, you're biased. Like, your brain takes shortcuts on purpose to make reasonable, but not optimal, decisions.
Well, that idea comes out of Simon's [00:09:00] idea of bounded rationality. Because you have only a limited amount of time, you evolutionarily have developed these heuristics, shortcuts, right, so you can calculate more. But again, the secondary version of it is, even if you could compute in your brain without bias, you still only have a limited amount of time and a limited amount of information.
And for Simon, there's a really interesting subset of why that argument is interesting. You can think about it this way: imagine a competitive environment that was trying to achieve global rationality. And you can just think of that in economic terms: we want to distribute all the goods that we produce with fairness across the system.
That's what economics is asking for, right? The only way to do that, well, you can't do it with humans. You've got to increase the availability of information and the amount of computational power. [00:10:00] Hey, all of a sudden you're into, we need computers. We need faster and faster computers that can store more data and access more data quickly.
And that is a way of getting closer and closer to global rationality. That's kind of what Simon's project was. And it's not coincidental that he's at CMU when he's doing this, at the beginning, the birth, of networked computation, right? This is an argument that he was making to get the type of funding that he needed to create more powerful computational systems.
John Willis: Yeah, no, I mean, I think I do want to have a lot of podcasts with you on, you know, all the things I'm learning about the connections between Simon and Wiener, and how it all relates to AI too, right? But I will table that, because I think that...
Again, I got to sit in on Jabe's dissertation defense, and thank you so much for inviting me. It [00:11:00] was an experience. I mean, just watching Jabe listening to the questions, it was really sort of mesmerizing. And I guess you've got to be a geek, geek, geek to say that, but I have pages and pages of notes. And as I'm thinking about what we talked about, the temporality and complexity, I think if I had to try to summarize what I heard, and I'm sure you wouldn't say this in your academic explanation, it's like: what's wrong with design?
Right? And, almost like what we did with DevOps, or very much like we do with DevOps. Like, there was something wrong with the way we were delivering software. Can we stop, like, smell the coffee, reassess, and then start? And I got the sense that's [00:12:00] what you were saying here, like, you know, hey, educators,
maybe we need to think more about how we... and the reason it resonates so with me is, when you think about it, a sub-part of the DevOps narrative is: there's no such thing as done.
Jabe Bloom: Yeah.
John Willis: And I got the sense that you were applying that at a much more macro level, to all design.
I'm going to shut up here, but it's like we're always focused on the delivery of something, and it's delivered, and that's design. We're going to design a house and the house will be there. But you were proposing that, no, it's temporal. It's just going to go on, you know, and it has states. So, yeah,
Jabe Bloom: So there's a couple of different ways to say it. The really simplest way that I've thought of saying what I was interested in was, you know, creating timeful design, as opposed to timeless design.
So, timeless design is like, if you go to a [00:13:00] museum, they will show you pieces of timeless design. It will be an Eames chair and a Rolex watch and these types of things. And those become, in a way, a way of thinking about what you're trying to do. You're trying to create something that's timeless, something that is not affected by the context of time and therefore will kind of exist forever as, like, a perfect thing, right?
There's all sorts of things you could say about it, right? Like, the Eames chair doesn't necessarily work without the context that the Eames chair exists in. Maybe that's a museum or a house, but an Eames chair in a tent is not a particularly timeless design. It's probably going to fall apart pretty quickly.
So timeful design, then, would be: how is the opposite true? How is it that design is in time? And in time in the same sense as playing jazz in time, like it's actually responding to a dynamic situation, as opposed to [00:14:00] trying to stabilize the situation to make this thing true.
Right? So that's maybe the simple way of thinking about change. But if you take that as a basic frame, the second frame that you can talk about is this: most designers talk about a problem and a solution, and that becomes true in a timeless frame. That can be something that's true.
I have a problem. I'm going to solve it. And once it's solved, I'm done, and I don't have to solve it again. But instead, a timeful approach might be to say there's a context and a proposition. And so, what's the context, and how am I proposing to intervene in the context? Therefore, as the context changes, the proposition is constantly re-evaluated.
And it may be re evaluated as being valid in multiple contexts over time, but eventually the context will change so the proposition doesn't make sense anymore. So again, if [00:15:00] you take an office chair and you think of it as a thing that takes on properties from the context that it's in, as opposed to a thing that has completely native properties, it is a good design in an office.
It's a bad design on a beach. The object didn't change, but the qualities that it takes on and expresses change, because the context has changed. So time is just like space in this way: as time moves, context changes, and there's feedback loops that are happening. And so design in time is this timeful activity.
And then you kind of get to this last idea, which is that design is always proposing something. It's always suggesting something. It's always asking questions, and it never stops doing that. And design therefore is always evaluated at the [00:16:00] time that that question is made relevant to the person.
So, in other words, when you walk into your room tonight, or later, when you walk into your office, play it out in your head. It's like a little game. Listen to your chair offer a place for you to sit. It will be saying: do you want to sit here? Do you want to sit here? Do you want to sit here? It will never stop asking you that.
And when you sit in it, you will evaluate if it's holding you in a way that you think is good for whatever task you're doing. So again, if you pull your La-Z-Boy up to your desk, it will say, do you want to sit? Do you want to sit? Do you want to sit? But if you're trying to type while sitting in that thing, you'll be like,
I'm not sitting in this thing correctly. The thing isn't working for what it is, because I'm evaluating it in use. It's not an abstraction. It's not just: La-Z-Boys are good for sitting at all times. And so that means that, again, it's always evaluated in use. So it doesn't have an end. It's ongoing. [00:17:00] And this, of course, is exactly what we mean in DevOps and in software development, where we often say:
any time that we make something in software, we can't treat it as the last thing. And of course, the DevOps argument is that frequently the developers treat it as an end state, as opposed to an ongoing proposition. And then the operators have to deal with all the impacts of it, with people still asking questions about a thing that won't change anymore.
John Willis: That's right.
Jabe Bloom: And so that ends up being kind of an interesting set of conflicts and ways to think through what we mean by designing something.
John Willis: Yeah, and I think, you know, as you were doing your dissertation, I was thinking about some of the stuff in John Allspaw's master's thesis, right, and he talks about Dr.
Woods and this sort of dynamic thrashing thing with incidents, right? And I think even one of the questions from one of your professors was sort of asking around this. This idea that I look at something, and I go, okay, I see this thing, I [00:18:00] think it's this problem A, and then I go ahead and I try to poke it with some device, which, you know, like Heisenberg, sort of changes the state of it.
So I'm actually constantly changing it as I'm fixing it. And I guess what you're saying is very similar: the temporal changes the context. Which gets to examples like, software is really never done. Software is always... even to the point where not only am I constantly updating it and changing it, but it's living in different spaces.
It's like, it's in a high-memory space, it's in a low-memory space, right? All that, right? Yeah.
Jabe Bloom: And you can think of technical debt the same way; it works the same way, at least the way Cunningham means it, right? Like, one of the things that is so confused about technical debt nowadays is people assume that the debt is initially a decision, as opposed to eventually a decision.
And what I mean by that is, in Cunningham's theory, debt is caused by people making new things. [00:19:00] And it is unavoidably true, and the reason for that is that someone has an idea of what might work, and they try to do it, and by having done it, they learn something that they would not have learned if they had not tried to do it.
And then, then all of a sudden, what they made and what they understand no longer match each other. So there is this cycle of, like, the, the way my brain understands the world and the way the software understands the world has a schism. And so I can update my brain state much faster than I can update the code on a disk.
And so there's always this drift between what I understand based on having built the thing, even if that's like, I understand how to structure it better, I understand the way the user's going to use it better, I understand the way other developers would understand. All those new forms of understanding is basically that your brain has a different temporality than the code does.
The code lives at a different speed than you do. [00:20:00] And so the result of that is that What becomes meaningful to people is when my brain is moving at one speed and the code is moving at another speed and that causes like friction points in particular parts of the code. And that becomes the question I have to answer.
All of a sudden it just pops, like there's this sticky part in the code that goes as fast as my brain wants it to. So I'm going to have to go and change it so that it moves, it's like, You know, it's like you got to get relaxed again. It's got to get like, untangled. And then all of a sudden it moves smoothly with my, my temporality.
John Willis: Then
Jabe Bloom: something else will get sticky.
John Willis: Right. Yeah, some anomaly or an incident, sort of like... yeah, no, totally. Well, you know, it's funny, because I think this other piece is mental models, and I was going to ask that directly, but you made me think about something I just worked on recently, which is, I was trying to explain, you know, [00:21:00] technical debt in terms of, like, where we're at with AI.
And I keep trying to stay away from going too deep there, but, how we've got to be careful, because we're always sweeping stuff under the rug, right? Like, every time we have a next gen. I've got five decades now I've been doing this, right? And so I put up this slide for this presentation about how, with each next gen,
we have this sort of promise of what it's going to be, then the delivery, what actually happens, and then there's a chunk of technical debt that happens. And then we go to the next one, and the same thing: we get the new promise, what really gets delivered. And I always wanted a bar chart showing the compounded technical debt at each next gen for each decade.
But then I was thinking, as you were talking, that the mental model plays in there very helpfully, because that's a part of this too. It's not just context and temporality. It's like, we're multiple people [00:22:00] trying to work on this thing, and in the sort of grand scheme of all that, we all have, you know, that bar chart of what I think it's going to deliver, what you think it's going to deliver, what it really did deliver. Like, that's a whole mental model discussion as well, right?
Yep, I think that's right. I
Jabe Bloom: think, you know, you've got multiple people with different understandings of the same thing. And, you know, again, this goes back to bounded rationality. It, they can equally all be true because they're not trying to achieve global rationality. Global rationality is everybody understanding the system the same way.
Because we know that's not true. We know that we can't achieve that, based on the previous part of the discussion. We all have local rationality about it, bounded rationality. So it's literally the elephant: people touching the elephant. Exactly that. And we all have different models of it, and the trick is not only to realize we're all touching the same elephant, but that we can interpret the elephant in different ways as long as it's useful to what we are doing. And then [00:23:00] the question is, how do we make each other appear rational to each other?
So one of the really basic questions is like, how do I know that what you're doing is rational? I don't have to agree with what you're doing. It doesn't have to be right, but I have to look at you and not be like, You're just doing crazy stuff. Why are you doing crazy stuff? And so that, that kind of sense of what Klein calls common ground is an interesting way of trying to talk about how, how do people be rational about a system together?
And it's not always this like perfect rationality that people want. And that's kind of where design emerges, because design is the thing that people do when, when you can't simply engineer your way out of the situation. In other words, if you can engineer it and have relatively concrete, quantitative answers to any of the questions, you don't really need to design something, you just engineer it.
But if you can't do that, if you have to, like, balance the interactions between people who have different [00:24:00] understandings of the same thing, or, you know, make something relatively new that you don't know whether it's absolutely the correct thing to do, well, then you're starting to tend into design. And then I think when you get to, you know, like, Mark Burgess and some of these other people, if you pay a lot of attention, you end up realizing that all systems are totally indeterministic.
You can't make them deterministic, right? And this is a different way of saying they're always probable. They're always probabilistic systems. It's not just our understanding that is probabilistic, it's not just the way we imagine the system; the system itself is the set of
John Willis: universes is right. Yeah, exactly.
Yeah. Yeah. There we get into Deming. But, you know, over the years you would talk a lot about recombining, and this idea of, like, how do we solve this problem? Like you said, it's a design problem. So we get multiple people, we've got to share, we have the, you know, tragedy [00:25:00] of the commons sort of idea, right?
Like, how do we deal with it? And I don't know if I was surprised, or was it sort of in there? Maybe it's in a 400-page thing that I didn't get to read, or haven't read yet. But, like, was that some part of the theme? Like, okay, we get that there are these base plates of temporality, there's context, there's mental models.
We need to think differently about design. And then there's this idea of recombining, and maybe you can explain that a little bit, as in, how do we sort of solve these problems, or not solve, but be better at it? Yeah,
Jabe Bloom: So, the term recombining I got from another one of my PhD friends, Dimeje, and he's amazing, and I love his PhD. My understanding of it and his understanding of it are just slightly different, but the way I tend to explain it is this.
In design, we have this idea that we refer to a lot, from a guy named Rittel, and it's called wicked problems, right? So a [00:26:00] wicked problem is this problem where, every time you poke it, the problem itself changes. And so it seems impossible to solve, because every time you poke it, it changes, and you can never do the problem-solution thing with a wicked problem, right?
So one of the things that I try to argue in the dissertation is, roughly, that's not... wickedness, that's not bad. That's design. Design is that. Design is not resolvable. Design is this ongoing interaction, ongoing negotiation between parties. That's what it is. And recombining is that activity.
And so if you think about that in, like, an IT frame, or what we do with dev and ops and stuff like that: the traditional way of solving problems is to kind of slice the problem in half and give responsibility for part of it to one group and part of it to the other group,
with the assumption that if I slice it the right way, these people will be able to solve that part of the problem, and these people will be able to solve [00:27:00] that part of the problem, and then it's not a problem anymore. And recombining is just a way of saying that that doesn't usually work out very well in the long term.
Someone ends up holding the bag, usually the operators in IT. They end up holding the "problem doesn't actually get solved" part of it. And recombining is an activity in which people come to realize that not all problems can be resolved by either global ownership, like a centralized IT department owning the problem, or by completely autonomous teams.
There's a set of problems that can only be solved by people sharing the problem, and the solutioning. And recombining is figuring out that, hey, since most IT organizations only have these two modes of thinking about it, either we give it to central IT to solve, or we give it to small autonomous teams to solve,
we don't have a lot of practice with this activity, which is [00:28:00] constantly renegotiating the boundaries and the ways in which we have to share certain parts of the infrastructure and resources. And because we don't have those skills, we don't know how to do that activity very well in most organizations.
We have economically suboptimal outcomes, because we either over-govern or over-own problems, and those don't produce the most economically viable outcomes. That's kind of the argument for recombining.
John Willis: and,
Jabe Bloom: you know,
John Willis: and I knew that worked its way into your three economies, right? And this was really helpful for me, because I think I always thought about, like, the sort of two-economy version, which is the way we think, right?
Or the central... but mostly DevOps is sort of built on Andrew's, you know, Andrew Clay Shafer's, wall of confusion, right? Like the sort of two-economy version. And then even the early Google seemed to be [00:29:00] like, you know, the Borg and stuff was,
I think, at least still thought of as either-or, right? Like we're almost in a fantasy world, ignoring that there's something in the middle. And what I loved about the scope economy that you had built is: no, let's point it out. There is the shared boundary that has to be scoped, between the differentiation economy, let's just call them developers, and the scale economy, you know, the operators. But obviously it's more than that, but, yep.
Yeah,
Jabe Bloom: Those end up having, again, different temporalities. Like, the time and temporality of developers is very different than the time and temporality of operators. Operators tend to own things for very long periods of time. That's right. Sometimes they own, you know, infrastructure that's 20, 50 years old.
Developers never own something that lasts for 50 years, ever, right? They're very promiscuous, right? They [00:30:00] like to be monogamous occasionally, like, they do one thing for a little while and they move on to the next thing. They're not into long-term commitment. The operators marry for life.
They're like naked mole rats. They pair for life. You know, developers, for instance, they move around. We're
John Willis: trying to change that in our industry, but yeah, but
Jabe Bloom: yeah
John Willis: But so, then, the other thing I thought was really interesting, and you sort of have to ground me back to how this all connects, and I kind of do know, but I want you to help me better understand: one of the things you got to talk about is
this idea of the apple. When you explained it to me originally, you described this idea of the erotetic method. And I'm so glad, because I think a lot of people were scratching their heads when you went through this, and I think you have [00:31:00] to, at least if you're sort of human, if you're not the unicorn, you have to hear it at least a second time to get it.
But I love the way you grounded all of this. I'll let you take it from here, but, the way you describe how we view an apple.
Jabe Bloom: Yeah, so, in phenomenology, there's often a reference to something like an apple or a building. And the idea in phenomenology is this weird question, which is to ask: if you're looking at an apple, even if you don't move, the light's shifting constantly, there's other people walking around, stuff like that.
So the image that you get of the apple is never quite the same image from one instance to another. And it's even more amplified if you walk around the table. You see different sides of the same thing. And the kind of question ends up being: how does your brain know that that's the same thing?
It's just getting different images all the time. So, A, it can slice some subset of the image out, the apple, and B, it can walk [00:32:00] around it and keep the thing in mind. And that's what we call an intentional object. We intend towards the apple. There's an apple-ness to the apple that makes the apple stand out from the table, even as we walk around it.
And one of the ways to say that is the intention is like futural in the same way we were talking about projects earlier. And what I mean by that is that. If you looked at an apple, and you said, that's an apple, and you walked around to the back of it, and it was hollowed out, and it appeared to be made out of papier mache, you'd be like, oh, wait, that's not the apple that I thought it was, it's a fake apple.
So, you intend towards an object because you have a certain set of expectations about how that thing will be. And when you determine it's not the way it was, you're surprised: oh, interesting, it's not what I expected, right? So that's a little bit of the intentionality in perception, but there's another way to think about it. It's called a temporal object.
It's called a temporal object. And what we mean by that is [00:33:00] like, it's a song. So if you listen to a song, you can have an idea of experiencing this whole song, but you can't experience the whole song. At 1 time, it takes time to experience the song, right? So, that's a temporal object. It's an object that only exists because.
Time passes. And so one of the strong arguments they make in the dissertation is that it's a confusion to think of a song and an apple as being different. Apples are just as much temporal as a song is. Yeah. So in other words, the same thing with what we were talking about with software. It's a mistake to think of the software as being an apple on a table.
If you think of an apple on a table as being different than a song. The software is a song. It's a temporal object. It exists in time. It does not exist at any point in time. So then, you know, there's these secondary things that happen because the, again, the objects take on meaning and [00:34:00] properties because they're temporal, which has to do with, like, what kind of questions can you ask about the apple?
That are valid based on the context you're in. So, this is what's called arictetics. And just to give you an example. You could ask a question about a thing that you intend as being a real apple. You could, you could say like can I make an apple pie out of it? Can I eat it? Can I whatever, you know, do whatever with it.
That is about real and there's certain valid things you can ask about it, right? But if you walked around and determined it was a fake apple, it would literally change the valid questions that you could ask. You couldn't, you could no longer ask, can I make a pie out of it? Yeah, so questions that you can answer about a system or ask about a system, there's a set of logic involved in that that's called erotetics.
And it basically is, how do you know that a question is, is reasonable and valid to ask? [00:35:00] So that has something to do with the context or the background knowledge that you have about the system. And then the interesting thing is that there's a secondary loop, which is when you answer a question about something in a particular context, it often changes the context itself, which changes what other questions are valid.
So there's these loops of: how do we ask good questions? How do we know what a valid or an invalid question looks like? And how do we know what questions are invalidated when we answer certain questions? Which is a very different type of rationality than propositional rationality, which says, like, if X, then Y. This is more like, if X, then these questions are still valid, but those questions aren't valid.
So it's like a filtering mechanism about what you can be interested in and curious about in a reasonable kind of way. And of course that has something to do with the bounded rationality stuff we were talking about as well, which is: [00:36:00] what kind of questions are reasonable to ask, given the nature of our human experiences, what we can know, what we can do as groups, all those types of things.
And so erotetics has to do with asking better questions, and understanding how to ask better questions, and really noticing that part of it. And again, in the sense that I said design is proposing, proposing is this process of having the objects ask you questions. Objects are whispering to you:
do you want to sit? Do you want to sit? The question is, how do you make the objects ask better questions? How do you make them better proposers of things, right? And so you can think, in a sociotechnical system, that means: how do we get a system to ask us better questions about its current state? To, like, literally help us understand it better?
How does it propose to us in ways that we go, oh yeah, we should be paying attention to these things [00:37:00] and less attention to those things? Because these are questions, these are uncertainties about the system, that we can assume are unresolvable, but we still have to pay attention to.
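(A rough sketch, for readers who think in code, of the question-filtering loop Jabe is describing: a context determines which questions are valid, and answering one changes the context. This is purely illustrative, not anything from the dissertation, and every name in it is made up.)

```python
# Toy model of the erotetic loop: the context determines which questions
# are valid to ask, and answering a question changes the context, which
# revalidates the whole question set. All names here are hypothetical.

context = {"apple_is_real": None}  # not yet resolved

# Each question carries a validity condition over the current context.
questions = {
    "Can I bake a pie with it?": lambda c: c["apple_is_real"] in (None, True),
    "What is it actually made of?": lambda c: c["apple_is_real"] in (None, False),
}

def valid_questions(c):
    """Filter: which questions are reasonable to ask right now?"""
    return [q for q, ok in questions.items() if ok(c)]

print(valid_questions(context))
# Both questions are still askable: nothing has been resolved yet.

# Walking around the table answers one question and changes the context...
context["apple_is_real"] = False  # it's papier-mache

print(valid_questions(context))
# ...which invalidates "Can I bake a pie with it?"
```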
John Willis: But so, this is where... so this is fun, right? So, I don't see... like, it's bidirectional, because one way I think of it is, we're looking at something and we're going to ask questions, and there's some science to it. And I'd like to know, what is the next level, without going too deep, of the science behind figuring out the right filter, if you will. But what you're also saying, it seems, and I'm sure bidirectional is the wrong phrasing, is that it is also sort of asking us questions.
So there's, like, both sides of it. So, two things: how do you make me understand the bidirectional nature of the chair asking me questions, or me asking questions about it? And then, how can [00:38:00] you take me to what the next level of the science would be, to sort of get the filtering process working?
Jabe Bloom: So the first thing is to say, the simple mnemonic that I use for this is: what we attend to, eventually, is what we intend as. And the simple way to say that is, when you pay attention to something, you learn to think of it in a certain kind of way, and therefore, when you re-approach it later, you already have ideas about what it is.
And so you basically have this feedback loop where the thing becomes more and more like what you expect it to be. It doesn't have to be true. It's just, you're not paying attention to all the other signals that it's emanating that don't have to do with "that's an apple," right?
And so the interesting question ends up being like, how do you break that loop and, and point out to people, hey, [00:39:00] like, just because you think that's an apple doesn't mean you have to keep thinking of it as an apple. You could think of it in different ways. And by thinking of it in different ways, you might discover different things about it.
So in my presentation, I said: if you went to a baseball field and you walked up to a pitcher and you gave him an apple, very easily he'd go, oh, this is like a ball, and he'd throw it down the lane. Because by changing the context of the apple, it appears differently to the person that's observing it.
It changes how they intend towards it. Right. And so a way to say that is: this is exactly what it means for a developer to go sit in operations for a little while. They go, oh shit, the way that I thought of that thing is not complete. There's a whole different way of looking at this thing.
It's not just an apple. It's also a ball. And so, is it like, then, is this very similar to the Argyris stuff, like the ladder of inference? Like forcing ourselves, so that [00:40:00] the science, then, is the science of basically breaking out of bias and loops? That's right.
Noticing, noticing that you have bounded rationality and not global rationality.
All of a sudden going, oh wait, these people view the same thing I'm looking at differently, and not invalidly. Their view is also equally true. It's just different than my view,
John Willis: right?
Jabe Bloom: And now, all of a sudden, we have, in academic terminology, a boundary object. We both share the apple, but we have different ideas about what to do with it or how it works. We both share the piece of software.
Developers see it as a place to put new features and information and blah, blah, blah. And the operators see it as something that has to run and update itself and be maintained. Same object, different ideas about what it needs to be able to do. The object has to answer different questions; it's got to point out different concerns.
So for an operator, it's got to point out, like, [00:41:00] you should be concerned about how much disk space I'm consuming, you should be concerned about how much network I'm consuming, whatever. For the developer: you should be concerned about how many people are coming, how many people are buying things through me.
It asks different questions, it proposes different ways of interacting with it. And what I think happens in a lot of organizations is that just one view of the system dominates, and it usually is the developer's view, the consumer-facing view. And so everybody in the organization is so busy trying to see the system like the customer that nobody is trying to see it as, you know, the developer or the operator, and realizing that those are equally important consumers, users of the system.
And therefore, you're not asking the right questions all the time. You over-ask questions about the customer. You under-ask questions about these other things. And you can see this play out in different ways, like [00:42:00] Google's idea of reliability,
John Willis: initially
Jabe Bloom: was based on the idea that reliability is a customer quality, but it required someone to put the lenses on and say, Oh, you shouldn't think about this as an operator.
This idea of reliability actually is a quality that's important to our customer. How do we understand that better? How do we do stuff about that?
John Willis: So that's what I was thinking: in a lot of ways, the reliability is that common responsibility, right? That's what I think was brilliant about your three economies. You know, when I started thinking about even Mark Burgess's stuff, yeah.
Mark Burgess told me one time, you know, he wrote the foreword to the first book, he said that one of the brilliant things that Google did is they made a non-deterministic world look deterministic to developers. And that was sort of a scope economy, like that shared responsibility to figure out a way.
And I guess that's an [00:43:00] erotetic method, if you listen to me, an erotetic method, basically. So, in very simplistic terms, SRE, like you said, probably was originally focused on how to create reliability, but over time it really became the shared responsibility. And that's why we all think SRE,
in the right delivery mechanisms, not the clone-and-copy versions, really solves a lot of these problems. And I guess this gets into the other part I really love, the whole constraints-make-flow thing, right? Where, like, if we think about a scope economy, how do we create... like, you've got to give... Andrew does a really good job of explaining this, right?
Like, we have to sort of... we have to give a little, get a little. That's not the way he says it, but, how we sort of... what does he
Jabe Bloom: say that ongoing negotiation of selfless concerns? Yeah.
John Willis: Yeah. Yeah. Yeah.
Jabe Bloom: So, yeah, [00:44:00] yeah. And you can think about it exactly as you pointed out: one is, they shape
what questions are valid for the developer to ask. Like, the really stupid, simple version of it is: should a developer ask, how do I get a machine? How do I get hard drive space? How do I get a network connection? How do I get storage? Are those things that you want your developers to ask? Or do you want the developers to say, I have a thing, I need to deploy it?
In different contexts, the answers to that would be different. I'm not saying that there's one way to define what questions a developer should ask, but the shaping of the questions, the questions that are valid to ask, is like the shaping of context. It's the shaping of how the context validates or invalidates certain types of questions.
And I think, you know, in high-performing organizations, one of the things that happens is that you shape a context in a way that the questions the developers are trying [00:45:00] to answer are primarily about the customers, the users, and not about things like, how do I manage infrastructure? How do I,
John Willis: yeah, or the sort of delivery of what, you know, in the case of that, you know, time design.
I, you know, I remember having a conversation with Kelsey Hightower long time ago, and, you know, start off like, ask me why I would, how come I don't work at Google? And I went into like, well, first, I never passed the test, but, you know, but I, I, I'd last 2 months there. But the, where we, where we got to, which was, you know, what I loved about, like.
What I saw SRE being really valuable was that sort of negotiation and I, out of that conversation, I came up with this, like, the thing I love about SRE is, so, you know, you're, you're the developer on SRE, you come to me and we sort of agree on some agreement of the sort of boundaries, right? That how we're going to, like, your SLOs and, and, you know, what you're going to get and, and, and, you know, even internally what you're going to [00:46:00] pay.
You know, when you use three nines, you're going to pay two million dollars or whatever in corporate money. And then you start, and we're like, we got everything, like, all right, we got it, Jabe, this is what we're going to do, this is great. You're going to do these things with your code. And then you say, well, John, but are you guys going to use, like, GCP or AWS?
Are you going to use Composable Info? I'm like, Jabe, you don't get to ask those questions. Yep. You define a set of qualities of the system. I will provide those qualities. I mean, you need three squirrels and a wheel. And you're getting what you, you know, what our transaction is all about, right? And so, yeah.
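(The "three nines" John jokes about has concrete arithmetic behind it. A minimal sketch of the standard SRE error-budget math that this kind of negotiation prices; the numbers here are illustrative only.)

```python
# Standard SRE error-budget arithmetic: an availability SLO implies a
# concrete amount of allowable downtime, which is what the negotiation
# John describes is actually pricing. Figures are illustrative only.

def error_budget_minutes(slo: float, days: int = 30) -> float:
    """Allowed downtime per window for a given availability SLO."""
    return (1 - slo) * days * 24 * 60

for slo in (0.99, 0.999, 0.9999):
    print(f"{slo:.2%} availability -> "
          f"{error_budget_minutes(slo):6.1f} min/month of error budget")
# 99.00% availability ->  432.0 min/month of error budget
# 99.90% availability ->   43.2 min/month of error budget
# 99.99% availability ->    4.3 min/month of error budget
```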
Jabe Bloom: Please don't try to answer my questions. Those are my questions to answer. Oh, yeah, absolutely. That's really cool. And that creates, again, that idea that you can define the qualities, as opposed to specifying the quantities, right? You can try to describe a system qualitatively, and even have measurements, but not specific [00:47:00] implementations.
As an architect, I just want to be like, this is the most basic idea, on some level. This is just abstraction. This is just: expose an interface, don't expose an implementation. And it's so confusing to me sometimes why people have a hard time understanding that that's exactly what a platform is trying to do, at some level. The platform is saying:
I'm going to expose an interface to you that provides you the methods that you need to do whatever the hell you need to do. Don't ask me about how I implemented it, because then you'll overbind to the implementation. And then what about changing the implementation later? You'll be stuck. You'll have to change a bunch of stuff that you didn't intend
to be bound to. You'll stick me, I'll get stuck
John Willis: with... somebody's going to get stuck. Right. I mean, we eat technical debt, right? But that's funny, because you're right. Like, anybody who's been doing this for a long time and still has a problem understanding it... it's part of us, like, seeing the same
patterns as opposed [00:48:00] to sort of implementations. But, you know, if you go back, even if you were just not sleeping through somewhere between '85 and '95, you understood the difference between interface and implementation, right? And you're right, for some people not to grok that now, at a platform level, is kind of ironic.
Right?
Jabe Bloom: It's just, it's like, interface and implementation on a grand scale, and they just haven't reprocessed what that's supposed to be doing, to some extent, at that level. That's pretty cool.
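(The interface-versus-implementation point Jabe and John are circling can be shown in a few lines. A hypothetical sketch; the names are made up, and it is not any particular platform's API.)

```python
# Expose an interface, not an implementation: consumers bind to declared
# qualities, so the platform team can swap the implementation (GCP, AWS,
# or three squirrels and a wheel) without breaking anyone. Hypothetical.

from abc import ABC, abstractmethod

class DeploymentTarget(ABC):
    """The interface the platform exposes: what, never how."""

    @abstractmethod
    def deploy(self, artifact: str) -> str:
        """Deploy an artifact and return its endpoint."""

class ManagedCloud(DeploymentTarget):
    # Which cloud? That's the platform team's question to answer.
    def deploy(self, artifact: str) -> str:
        return f"https://svc.internal/{artifact}"

def ship(platform: DeploymentTarget, artifact: str) -> str:
    # Developers call the interface; the implementation stays invisible.
    return platform.deploy(artifact)

print(ship(ManagedCloud(), "checkout-v42"))
```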
John Willis: So what's next now? Because, I mean, it's always like, you know, hey, can we do this thing, old man? And you're like, I'm still working on page, like, 325 of my 400. For almost all the time I've known you, within the last, you know, five or eight years,
the subset of every conversation was: I'd love to do that, John, but man, I've got to get this thing done. But now you must feel this, A, incredible freedom, but then what fills the gap for [00:49:00] you on all that? You know, or do you just take some time off?
I,
Jabe Bloom: I can't take too much time off. If only because, I feel like... it's the reward thing. Like, I actually finally got a taste for being successful at writing, after many, many years of being afraid that I wouldn't be successful, if that makes any sense. So I think I need to probably write a book next.
I'm trying to decide what that will be. I think there's two versions of it. I'll either write a very philosophical design theory book that will be read by a couple hundred people, or maybe I'll write a book about the three economies, which I think would help me introduce these ideas in a more abstract way, but to a larger audience, and maybe have more impact that way.
So I think it's the three economies thing that I'll [00:50:00] probably pursue more. You know, my dissertation is 400 pages long, and nobody wants to read a 400-page business book. So I wonder if I can get, like, 80 to 150 pages of clarity that would help people understand the ideas a little bit better.
Really cool. Because one of the things I love about the idea is that it fulfills one of Goldratt's things. Goldratt says, with a really good idea, when you say it to somebody, they'll go: oh yeah, that's right. Why didn't I ever think about it that way?
John Willis: Yeah. It just makes sense.
Yeah, my favorite thing in Beyond the Goal, and I've listened to it so many times I've got it kind of memorized, is he's talking about how you get a Nobel Prize in physics. And, he uses his accent, he laughs at his own jokes, and he says: you write a two-page memo. And when every [00:51:00] other physicist in the world reads it, they go, oh, shit.
Jabe Bloom: Exactly.
John Willis: Yeah.
Jabe Bloom: I mean, people get that initial flush of, all right, this is right. The question, I think, then ends up being: what are the implications of understanding it? And that, I think, needs to be detailed a little bit in the book. Like, okay, yeah, this makes sense, I get it. But what do I do about it?
John Willis: Yeah.
Jabe Bloom: Yeah. And so I'm interested in creating a book that at least helps people start that exploration so that we can have more research being done about the ideas so we can discover more about, yeah, I think there's
John Willis: so much there. I told you, I got knocked off the network when you guys were doing one of your live things, and you saw that.
And you know, I had gone back to a presentation you helped me write at Red Hat, where I was trying to think about, like, the new platform. It was something for one of Gene's conferences, and you helped me through, you know, Alicia Juarrero, who was in [00:52:00] there, and then obviously Ashby, right?
And that whole idea that constraints create flow, right? I love those counterintuitive things, right? And I think there's so much there between your three economies, the scope economy, the idea that no abstractions are terrible and too many abstractions are terrible, but what's the right temperature of abstractions?
And I think a lot about this now with AI; maybe we'll do another one, I definitely want to have another conversation. Because I really think right now, one of the things that everybody should be asking about... I worry about technical debt here, I really worry about this, because this is a complex domain. You know, cloud, at the end of the day, was network, compute, storage.
So even though some of the abstractions scared us originally, we got over the hump really quick. Oh, it's just a virtual machine. Oh, that's just network block storage. But [00:53:00] this is a domain where, for most of the people who support or protect the fort, it's all non-deterministic, all code that's not written in the same way; the logic behind it, and all that, really doesn't fit our patterns.
And so it's going to be exponentially more complex for the people who are going to have to support it. And, you know, maybe I'm a meat-and-potatoes person, but the more I think about it, the first people that everybody who wants to bring in a copilot should be having a discussion with is...
let's get SRE up to speed, so they can say: hey, I really will only support this if you have two vector databases, if you use these two versions of LangChain. Like, I think that conversation has to start yesterday.
Jabe Bloom: I think that's right. Yeah, I think, again, having the right set of constraints in place, especially for a [00:54:00] system that
is non-deterministic... and, you know, all systems are non-deterministic, right? We said that at the beginning. But these are just inherently more non-deterministic, in your face, than other ones. They're assumed to be non-deterministic, as opposed to, like, a car, which is generally thought of as being deterministic, right?
Like, it is indeterminate at some level, but you should be able to get into it and go someplace. It is
John Willis: temporal state. It's not.
Jabe Bloom: That's right. Exactly. But these things are intended to be, like, produce novel results almost constantly. That's what they're supposed to do. That's kind of like the way they work in a way.
So, you know, it makes me think, again, of the stuff that we've talked about a little bit in the past. To what extent should we think of it in, like, the three-wire mode that people thought about for launching the original rockets? You should run three systems. You should have all three systems respond to the same thing.
And they should all be in the same vicinity. If one system [00:55:00] produces something out of the ballpark of the other two, you should be like, what's up with this version? And be able to constantly check that the systems are producing within a reasonable range of variance, instead of, like, incredibly broad variances,
to the extent that you want to produce a system that's in control. So, like, you know, we could go right back to some of the Deming stuff.
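(A toy version of the three-wire check Jabe describes: run redundant systems against the same input and flag any one whose answer is out of the ballpark of the others. The system names and the tolerance are made up; in practice the three systems might be redundant model endpoints.)

```python
# Three-wire redundancy check: compare the three systems' outputs and
# flag any that stray too far from the median. Names are hypothetical,
# and the 10% tolerance is an arbitrary illustrative choice.

from statistics import median

def three_wire(responses: dict[str, float], tolerance: float = 0.10):
    """Return the systems whose output falls outside the agreement band."""
    mid = median(responses.values())
    band = tolerance * abs(mid)
    return [name for name, value in responses.items()
            if abs(value - mid) > band]

# Two systems agree; the third is out of the ballpark -> investigate it.
print(three_wire({"sys_a": 102.0, "sys_b": 99.5, "sys_c": 240.0}))
# ['sys_c']
```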
John Willis: I can tell you, I hadn't really thought about that. Like, I've been thinking about SRE, but, you know, I've been doing a lot of work with what they call, you know,
LLM observability, or model observability. It's not like the Honeycombs and Dynatraces. It's actually, you know, correctness. It's bias monitoring. It's relevance. It's hallucination management. And you know, I think
Jabe Bloom: Those are all really super interesting questions to ask. Again, the really simple, high-level version of it is: how do I know this thing's in [00:56:00] control?
How do I know I'm in control of this thing? Right. And that's the question to answer.
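(The classic answer to "how do I know this thing's in control?" is the one John reaches for next: Shewhart-style control limits. A minimal sketch with made-up data, deriving the limits from a baseline that is assumed to be in control.)

```python
# Shewhart-style control check: derive limits from an in-control
# baseline, then flag new observations that fall outside them.
# The latency numbers are made up for illustration.

from statistics import mean, stdev

def control_limits(baseline: list[float]) -> tuple[float, float]:
    """Mean +/- 3 sigma of a baseline assumed to be in control."""
    mu, sigma = mean(baseline), stdev(baseline)
    return mu - 3 * sigma, mu + 3 * sigma

baseline = [101, 99, 103, 98, 102, 100, 97, 101, 99, 100]
lo, hi = control_limits(baseline)

for x in (102, 99, 160):
    status = "in control" if lo <= x <= hi else "OUT OF CONTROL"
    print(f"{x}: {status}")
# 102: in control
# 99: in control
# 160: OUT OF CONTROL
```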
John Willis: And we know, you know, if there's some consistency, Shewhart's control chart is basically, probably to date, the best way to know if something's in control or not. Anyway, we're about at the hour.
I definitely would like, and I'm sure you'll be up for it, to do an AI one, because I want to go back to Herbert Simon, AI, Norbert Wiener, what I've been learning, what you've learned over all your research, and have some real fun comparing notes. And our whole first conversation started with me talking about my experience of having these sort of retrieval dialogues with chat, you know, not just ChatGPT, but building my own corpus of data.
And that's where you got into the erotetics. And I think there's fascinating stuff there. But everybody knows where to get you, and I'll put it in the show notes, but tell everybody how they find you.
Jabe Bloom: Sure. You can find me at jabebloom.com. My [00:57:00] blog is currently down, but you can see a nice picture of me. I'll fix that soon enough.
And you can find me at Ergonautic, that's ergonau.ly, and you can find me on Twitter at cyetain, although I've used that less and less. And yeah, send me an email, it's easy enough to figure out if you go to Webster's.
John Willis: That's Dr. Cyetain, by the way.
That's
Jabe Bloom: Dr. Cyetain now. That's right.
John Willis: All right, my good friend. Thank you. Thanks, man.
Jabe Bloom: All right. So nice to see you again.