S4 E1 - Adam Hawkins - Seeking Quality in the Digital Landscape
I have an enlightening conversation with Adam Hawkins in this episode. We dive deep into W. Edwards Deming's seminal perspectives on quality.
Our dialogue explores how this expansive framing of quality requires connecting producers and consumers in a broader system. We relate these concepts to service level objectives (SLOs) and their role in quantifying acceptable loss balanced against keeping systems valuable. This leads us to the profound realization that quality is contextual, varying across different systems and business needs.
Among other topics, we touch on the intricacies of variation, Deming's red bead experiment, the challenges of measurement and pragmatism, bringing quality thinking into software, and the difficulty of conveying Deming's multifaceted ideas to novices. Our exchange centers on constructing shared mental models to apply Deming's deep knowledge in today's digital landscape.
Adam’s LinkedIn can be found here:
https://www.linkedin.com/in/hi-adam-hawkins/
Resources and Keywords:
Adam Hawkins' podcast: Software Delivery in Small Batches
"Deming's Journey to Profound Knowledge" by John Willis
Bill Bellows' 14-part series on the Deming Institute
"The Art of Business Value" by Mark Schwartz
Google's SRE (Site Reliability Engineering) book
"Quality the Japanese Way" (book mentioned, author not specified)
"The Birth of Lean" (book mentioned)
Taiichi Ohno's books
"The Toyota Way" (book mentioned)
DevOps Enterprise Summit (conference mentioned)
Two-part podcast episode with Bill Bellows
The Deming Institute (organization mentioned)
"Out of the Crisis" by W. Edwards Deming (mentioned as not being the source of a specific example)
Transcript:
John Willis: [00:00:00] Hey, you know, for those of you who do listen to my podcast, you know I'm not well organized, but it is what it is. And I tend to, you know, try to strike up conversations when I think they're... it's like I was joking that sometimes I get on a call with somebody and I'm like, we should be recording this.

And I've got a guest here that you must know by now, Adam Hawkins. And you know, one of the things I wanted to say, which is interesting about the way we communicate these days online, is I think in any other universe we would have never met, but the gravity of the way we think alike has brought us to have more conversations than I would have ever expected.

And you know, I think I've been on Adam's podcast a couple of times. We'll put that in the show notes. But he was instrumental in that, you know, he gave me incredible feedback on my book about Deming. And, you know, [00:01:00] I'll say something and he'll write something about it. It's just really been fun to interact with this gentleman.

And the reason we're talking today is, for those who listened to the two-part Bill Bellows episode, Adam, in his insanely awesome way, wrote a whole blog article about that, which made me think again about what transpired there. I think everybody who listened to the two-part Bill Bellows understood that Bill Bellows is incredible. And so anyway, that's why we're here, Adam. Again, I don't think you need an introduction, but go ahead and introduce yourself.
Adam Hawkins: Well, thank you for that generous introduction, John. And yeah, I agree. If it weren't for the internet, I don't think we would have met and been able to collaborate and have the kinds of conversations that we have had. So definitely grateful for that. For the listeners who don't know me, I host [00:02:00] the Small Batches podcast and do a Substack called Software Kaizen. On the podcast I do some interviews and short-form, five-to-ten-minute episodes introducing topics like lean, continuous delivery, DevOps, lots of stuff on software delivery, with a heavy focus on Deming, and then I write longer-form stuff on the Substack.
So, to bring it back to what you said, John, right? How we kind of got here, the conversation with Bill Bellows. Listening to the two of you talk, what piqued my interest was when I noticed that your mental model of Deming, you were talking about statistical process control, you know, precision versus accuracy, and Bill's talking about it, and I noticed that there were different mental models at play. And my thinking was: okay, this is great, because it was an opportunity [00:03:00] to learn about something that I didn't truly understand at the beginning. And listening to the whole conversation, at the end, I was kind of thinking of something Gene Kim said recently, which is: I don't know exactly what you're all talking about, but I can tell it's highly important and I want to learn.
John Willis: I love that. Yes, right, that's right. You know, I guess people sort of yell at me because I'm self-deprecating a lot. But I think we all have imposter syndrome, and so, you know, I know I'm not a dumb person, but I'm always fascinated by my sort of luck, and again, I don't think it's luck, to be able to have conversations with people who are so freaking smart, you know what I mean, and to be able to sort of hold my own. Like Mark Burgess, and we won't go on a tangent here, but Mark Burgess is one of the smartest people I know, and I've had incredible conversations with this person who, if you had told me when I was young that I was going to have [00:04:00] conversations with a physicist whose advisor, when he was getting his PhD, told him he was basically too smart to go into quantum physics... but Bill Bellows is sort of like that kind of guy.

But I think the other thing, just to set the stage: I've always considered myself a great crash dummy, you know, on all things, right? Like, something I do very well is when people send me their product. I'm the classic person who works in a bank who is just a little above being a good technician, but will make a lot of stupid mistakes. So I'm phenomenal. But don't send me your product, I'm too busy. But I'm phenomenal at, you know, give me your... how do you install your product on the public internet? And I think I'm the same way when I'm working with Sidney Dekker or Mark Burgess.

And now I've found sort of another [00:05:00] comrade, if you will, which is Bill Bellows, who's willing to sit with me and give me the respect of answering the questions. And I might not always ask the questions in the right way... again, it goes back to mental models. I love how you characterized that conversation as starting out as conflicting mental models. And then the battle, you know, a friendly battle, but I mean, I was battling. I knew I had a point here. Like, why am I not getting this through? So where did you see that expanding?
Adam Hawkins: Well, I mean, how I noticed it was that the conversation slowed down, and you kept asking different questions, and Bill would say something kind of repeating, but slightly different, right? And they just kept going, kept iterating, kept asking the same questions. But I think [00:06:00] it's also kind of the fault of some of the metaphors and where people put implicit goods in, right? So, for example, Bill brought to the conversation... the two-part episode just centered around Taguchi and his work on quality, the Taguchi loss function, and, was it, the multivariable factorial methods or whatever? Yeah, that's a whole other line of discovery. But he's bringing that, and thinking already kind of down to that question, which is: how many different ways can something meet requirements, as a connection to the Taguchi loss function. And I felt like you were approaching it more from: here's Deming, here's statistical process control, right? And the focus then becomes reducing variation. But the point that Bill was making is that reducing variation is [00:07:00] great, but doesn't mean anything unless you're actually meeting the requirements. You know, you can have a system that has no variation at all, but it's doing nothing that's useful. And it took a while for that point to come out, right?
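A quick sketch for readers following along, with made-up numbers (nothing below is from the episode): precision is how little a process varies around its own mean; accuracy is how close that mean lands to the target the consumer actually needs. A process can nail one and fail the other.

```python
# A minimal sketch (made-up numbers) of precision versus accuracy.
import statistics

TARGET = 10.0  # the aim of the system, e.g. a spec the customer cares about

def precision(samples):
    # Lower is better: spread of the process around its own mean.
    return statistics.stdev(samples)

def accuracy(samples):
    # Lower is better: distance of the process mean from the target.
    return abs(statistics.mean(samples) - TARGET)

tight_but_off = [7.1, 7.0, 7.2, 6.9, 7.0]    # almost no variation, misses target
loose_but_on = [9.0, 11.0, 10.5, 9.5, 10.0]  # more variation, centered on target

for name, s in [("tight_but_off", tight_but_off), ("loose_but_on", loose_but_on)]:
    print(f"{name}: stdev={precision(s):.2f}, |mean-target|={accuracy(s):.2f}")
# tight_but_off "reduces variation" beautifully and still fails the consumer.
```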
John Willis: And that is sort of the part of me being able to articulate it, because I listened to his 14-part series on the Deming Institute with him, which is just brilliant, right? So I was pretty prepared and I understood his... for those who didn't listen, and I don't know why you'd leap to this episode first and not listen to the two-part Bill Bellows, but he describes it simply: there are really two types. One is black or white: does it meet specification? If you understand statistical process control, does it fall within the control limits or not, right? And a lot of the criticism of Six Sigma is sort of that too, right? Like, that's good. And then the [00:08:00] second is the larger question, what he called precision versus accuracy, right? And so precision is, you know, is it sort of meeting the requirements, are you reducing variation? And I think his assumption, and maybe your assumption too, is that I'm so deep in the understanding of variation and Deming, because Deming really doesn't... I mean, he talks about Taguchi, like, in a second edition, where he added a chapter on it. But if you read most of his stuff, it is about variation, or what Bill would call precision. But I knew what he was getting at in terms of accuracy. Now, the reason I asked him what I did, and that's a good question, is because I wanted to understand deeper. So my questions, which sounded weird, were about the dartboard, [00:09:00] his sort of metaphor of the dartboard.

And what I was trying to say is, hey Bill, I do understand this accuracy thing, but couldn't we use the red bead game, for example, as an accuracy example? And he's like, no, no, that's just variation, and that's precision. And it took a long time for me to get to where he kind of said, oh, I see your point. Which was: what if all the beads are white? Which, by the way, will never happen, right? Never. But let's just say the number of red beads is so small that you really have nailed it with your suppliers and all that stuff, right? Because that's the real story behind the red bead game: why do we get red beads in the first place? If we could get there, then we'd say, well, okay, the ROI of improvement at this point is probably not worth it, right? What do we do next? And what I loved is he said, you know, most people [00:10:00] who answer that question... so I think he thought I was like most people, right? Like his students, because he asks the question, what next? And they say, well, let's make them faster, let's make them cheaper. And that's not the Taguchi thing. The Taguchi thing, which, like I said, even you didn't realize I was trying to get to, is: what if you could get higher quality beads? And what I loved about your analysis of that is it was sort of an aha moment you got from Bill, not from me, but I think it's because I wouldn't give up, you know, I was grabbing onto his leg and not letting go. You characterized that as an aha for you, which becomes this infinite moment. Now it creates not just the sort of stopping point of precision; now you create this level of infinite opportunity for improvement. And I thought that was a really cool observation on your part.
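As an aside for readers: the red bead experiment John references is easy to simulate. A minimal sketch with the classic 20 percent red mix (the worker names and counts are made up):

```python
# A minimal sketch simulating Deming's red bead experiment: every "willing
# worker" draws 50 beads from the same mix, so the red counts vary by common
# cause alone. Nothing any worker does changes the system producing the beads.
import random

random.seed(42)
BEADS = ["red"] * 800 + ["white"] * 3200   # the classic 20% red mix

for worker in ["Alice", "Bob", "Carol", "Dave"]:
    draw = random.sample(BEADS, 50)        # the paddle draws 50 beads
    reds = draw.count("red")
    print(f"{worker}: {reds} red beads")
# Rewarding one worker or firing another changes nothing: the variation
# belongs to the system, not to the people working in it.
```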
Adam Hawkins: Well, I think that's where the real value in that conversation is. To [00:11:00] my earlier point about the challenge of the metaphors: they frame the thinking a certain way. They're naturally trying to simplify the problem, removing some of the vagaries and the nuances, which opens up the remaining questions, right? Like, you can kind of get there with the dartboard example of precision and accuracy, but you have to change it a little bit, because there are multiple ways a dart can hit the board, right? Even in the exact center of the bullseye, there are many ways a dart can hit the board. Whereas in the red bead game, and that's, I think, where the challenge became: Bill is thinking, I think, more implicitly about n number of ways a thing can meet requirements, right? But then you adopt a metaphor where there's only one way something can meet requirements, that's the red bead game, the bead's white or red. Then you really push on that example all the way to the [00:12:00] end, right? You get to that question. And everybody listening to this podcast is probably thinking in terms of continual improvement: what can I do to improve the system? Well, then, hypothetically, what do you do if there are no more red beads and you only have white beads? The question becomes: how do you make a better white bead? How do you do that, right? And then Bill goes on to say things like, you know, maybe the shade of white is different, right? You could get more precision in the shade of white. You could get more precision in the diameter of the sphere, or the smoothness of it, right? These are all more precise versions of the things that make it accurate. And then, like you said, there are infinite ways that can happen. But the other part about the second question of quality, how many different ways can it meet requirements, is that it's a [00:13:00] subjective question, in the sense that, as the supplier of the white beads, I would need to ask you, the consumer of the white beads: what do you think is a better white bead, right? The consumer there is the arbiter of quality. So once you move out of that binary framing, where you could, say, statically measure the colors, white or red, in a vacuum, you move into an infinite space where the answers are only available when you begin to think in systems. And that's where you connect the producer and the consumer, and that gives you the system.
John Willis: The system itself, right? Which he uses the metaphor of building something at home for. And I want to come back to that, because there's a couple of things I thought about.

Adam Hawkins: Yeah, that one's great too.

John Willis: But in essence, what that conversation led to is what Taguchi is all about, right? Which is [00:14:00] that it really is about who receives it. And my point was it was ingress too, right? That was another one he got right away; he didn't argue with me on that one. But that's a system.

But a couple of things I wanted to point out, and then I want to get into the whole Taguchi quote, and how it confused the heck out of both of us, including him, and how it made more sense near the end than before, for me and you, I think, once we got there.
But I thought one of the things about abstractions, which is interesting, as you were saying: it's sort of like we can't live with them and we can't live without them, you know? They help us frame things. And I remember asking John Allspaw, when I was first getting into Cynefin, and if people have heard of Cynefin, there's some interesting stuff there. And I thought for sure John, John Allspaw, you know, very famous DevOps person, very into resilience and adaptive capacity these days, I [00:15:00] remember asking him what he thought about Cynefin, and he flat out said, I don't like it. I'm like, really? Because that's a complexity model and it's a way of framing sensemaking, you know. And he said, yeah, in short, I'm paraphrasing: I don't like abstractions. They can block your ability to learn, right? And again, you can live with them because they at least give you a framing.

But the other thing I was thinking about is about accuracy. This is really cool too. We ran the first DevOpsDays in Tokyo years ago, me and Damon Edwards, and the guy who ran it on their side took us to this incredibly cool sake bar. And he explained all this complexity of, you know, the levels of sake, and I forgot all about it. And then I was on this Japan study trip last year and we went to a sake... basically not a brewery, but [00:16:00] like a whiskey distillery, I guess, where they made sake.

And I got to ask the master, I don't know if you'd call him a brewer, but whoever the master sake maker was: hey, you know, I've been told... and he went through it. What's interesting in sake is there are these levels, and he finally explained it to me at a level where I understood, which is: it's how much they shave the rice. I didn't know that, and that makes the quality, right? So that's a great sort of... I don't know if there's a Taguchi somewhere in there, but it's like the white bead thing. But anyway, those are two points. So going back to the Taguchi thing: you made the observation that I did, and I think you said it better than I could, which is the quote at the beginning of what Bill Bellows said, where he read Taguchi's quote about, like, society. Like, what the heck [00:17:00] does that mean? And then, when I'm listening to the podcast, I'm like, what the heck does that mean? And then we sort of somehow get to, oh, I get it.
Adam Hawkins: Yeah. And just for the listener, the quote here, from Taguchi, is that quality is the minimum of loss imparted to society by a product after it's shipped to the customer. So if you start there, you're just like: okay, well, what the hell is this, society? What are we talking about here? If you take off society and just shorten it: quality is the minimum loss imparted by a product after it's shipped to the customer. The key thing there is that it's shipped to the customer. So now you've created the connection between the producer and the customer, where the customer becomes sort of the arbiter of the quality, right? And then if you add back in society, you can [00:18:00] make society as broad or as narrow as you want to, right?

When I'm playing with these kinds of ideas, I like to take them to the extremes, just to play with them at the ends of the distribution, right? So the way I was thinking about this is: what would be the maximum loss imparted to society by a product? And I thought, okay, there are these ideas about how we can, as a society, dispose of nuclear waste, right? And one of those ideas is to shoot it off into space. Well, what if that process goes wrong, and the product designed to ship it off into space actually turns around and falls back down to Earth, causing massive loss imparted to society? You could say that's a pretty poor quality product, right? But the people making that rocket, they're in the lab, they're doing God knows how many tests, how many experiments, tuning each individual piece, right? But then you put the whole thing together and, for whatever reason, [00:19:00] it comes back down to Earth, and boom, right? And the reason I like that example is because maybe the customer purchased this thing, but the loss is imparted to society as a whole, to people who were never even part of the purchase or procurement of that thing. That ripple can spread out as wide as it possibly can, right? And if you adopt that definition of quality, you naturally start to think more about the interconnections and interdependence of the environment where the product is going to be consumed. And that just opens a door to an infinite way to think about quality, which I think was one of Bill's key points and one of the key insights.
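For readers: the textbook form of the Taguchi loss function that Adam and John keep circling is quadratic in the deviation from target, which is what makes "barely in spec" still lossy. A minimal sketch with hypothetical numbers (the dollar figure at the spec limit is invented for illustration):

```python
# A minimal sketch of the textbook Taguchi quadratic loss function (not code
# from the episode). Loss grows with the square of the deviation from target,
# so "barely in spec" still imparts loss, unlike pass/fail thinking.

def taguchi_loss(y, target, k):
    """L(y) = k * (y - target)^2; k scales deviation into cost."""
    return k * (y - target) ** 2

# Hypothetical example: target diameter 10.0 mm, k chosen so that being at
# the spec limit (+/- 0.5 mm) costs $4 per unit.
target, k = 10.0, 16.0
for y in [10.0, 10.1, 10.4, 10.5]:
    print(f"y={y}: loss=${taguchi_loss(y, target, k):.2f}")
# Pass/fail says all four units are equally "good"; the loss function says
# the unit at 10.5 already imparts the full $4 of loss to whoever consumes it.
```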
John Willis: Well, you know, it's funny. Of course, I normally purposely pitch my book every time, but [00:20:00] as you were going through that, and this is why I love doing these kinds of podcasts, it makes me think about things I've thought about, and then, oh my God, there's another way to think about it: the Knight Capital story. It doesn't have to be nuclear waste, right? It could be simple. But what was the societal impact there? I mean, that company went out of business in 24 hours, right? And I tried, in my book... in the back of my head I had John Allspaw yelling at me, counterfactual, counterfactual, you know. In fact, my coauthor was like, who is this John Allspaw guy? Why do we have to worry about him? But I wanted to be really clear that this was my hypothesis of how Deming would have thought about a systems thinking approach to all the things that led to that almost Air France 447 event, right, where everything went wrong, or what Diane Vaughan would call normalization of deviance, [00:21:00] right?

And the point I was trying to make was, there was no single part. Just for a real quick recap: there was a program called Power Peg that was designed to stress systems and was never supposed to run in production. It literally bought stocks high and sold them low. It was lingering on an old system. They were rushed to put a new system in because a change at the SEC allowed people to run an API in, you know, the sort of black box trading stuff. And the CEO first didn't want to do it, then he said let's do it, and then they only had a month to do it. I mean, all the things you would find in the Challenger accident. And the thing I was trying to drive at was that every component of that, [00:22:00] you know, is what I think Bellows would say, maybe Taguchi would say, was precise. I mean, Power Peg was a program that was perfected and literally did great things for testing and stressing their ability to do high frequency trading, but it should have never gotten turned on in production. Because, again, counterfactual here: the sysadmin probably left a comma out in a list of eight nodes that was supposed to be updated, and that code was still running on that one server. Maybe it was, you know, server one, comma, server two, comma, comma, server three, right? And the next thing you know, you've got buy-high-sell-low running at high frequency, and the exchange is basically doing this. Well, there's another example of societal loss, right? But it's funny, because it always goes back to systems thinking, right?
Adam Hawkins: You know, and you [00:23:00] don't even have to go back in the past, actually, to find examples of this kind of thing. I mean, if you're familiar with the story that's evolving in the UK, with their post office and the IT stuff...

John Willis: With Fujitsu. You know, I've just seen some postings. It's in my queue to read. But yeah, why don't you?
Adam Hawkins: It's just one of those examples... I don't know enough about it, but I'm just pulling up one of the news stories here to talk about it. Okay, let me read a little summary. It says the initial fault was with Horizon, a digital accounting system installed by Fujitsu, which wrongly said post office branches had cash shortfalls. So it caused an accounting error, right? Fast forward a little bit. Okay, if your account balance is reading wrong, things are going to go wrong, right? And then, overall, let's see: 3,500 branch owner-operators were wrongly accused, more than 900 were [00:24:00] prosecuted, and many of these people were jailed. Wow. Some suffered significant ill health. So, okay, you have a software error causing an accounting error, which then results in people being prosecuted and sent to jail, simply because the screen they're looking at is telling them the wrong information and they're acting on that information, right? As a programmer, you probably never thought about that happening, right? But if you follow that definition of quality all the way through, yeah, you caused some loss imparted to society.
John Willis: Yeah, no, I think we could, you know... we don't need to go do a Normal Accidents podcast here, but I think we could literally just go through a lot of those stories, which, again, are about society. I mean, the question comes up a lot, it came up on a podcast I did recently: can software kill people? Yeah. In fact, I just read a book... I've been deep in this AI stuff, and I've been reading some books about AI bias. And the bias is [00:25:00] basically that a lot of these models that do facial recognition or people recognition are heavily weighted towards white males. And so now cars have a bias, you know, when it's trying to decide, if it knows it's going to crash, whether it's going to hit one sort of race of people versus... I mean, it might not recognize you. I'll post it, I don't remember the exact name of the book, because I've been reading a lot of AI books lately, but I will put it in the show notes, because it's just fascinating. So, me and you could talk about things like the postal stuff, the Knight Capital stuff, and go through a list, but it's actually going to get a lot worse.
Adam Hawkins: Oh, well, it'll [00:26:00] probably get worse before it gets better, right? Yeah.

John Willis: Well, I mean, again, I'm not a doomsayer. I'm all in. But yeah...

Adam Hawkins: I agree with that.
Yeah. Like, as software becomes more integrated into systems that it hasn't been in before. I think cars, personally, are a wonderful example of this. And as much as I love technology, I fall onto the side of: give me the simplest car possible. I'm not interested in big touchscreens and self-driving cars and complicated things like that. No, I like my simple mechanical systems with just a little bit of electronic fuel injection. That's about as much as I'm willing to take, because I know that car will last for 60 years, and it's not going to require $10,000 in maintenance because some touchscreen broke or it got bricked in a software update. I'm not wanting to adopt those kinds of failure modes. [00:27:00] But the introduction of more advanced technology in vehicles is changing the way we interact with them, the capabilities they provide to us, and thus the different ways they integrate into more complex systems, i.e. society, the roads. You know, to your point about the lack of variation in the data sets used to train these models, the assumptions that they make, what that may result in.

Or even something as simple as... hey, this even happened with Rivian recently, right? Did you see how they bricked one of their updates? They shipped an update, and people can go look this up, but it was basically signed with the wrong cert. So when it was received by the car, the car would start the update, fail [00:28:00] the certificate verification, and then brick. The only way to get the car moving again was to take it to a dealer and have them do whatever. So those kinds of failure modes and breakages... hopefully they never happen. Well, I think you and I both know never is just a fantasy. They're going to happen. It's a question of how statistically significant they are and how frequently they happen. It requires people to have a deeper commitment to quality, broadly speaking, to succeed in an increasingly complex world.
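As an aside: the failure mode Adam describes suggests a common fail-safe OTA pattern, verify before you switch, and keep the old image bootable. The sketch below is hypothetical, not Rivian's actual mechanism, and real systems use public-key signatures rather than the shared-key HMAC used here for brevity:

```python
# A minimal, hypothetical sketch of a fail-safe OTA pattern: verify the
# signature *before* switching partitions, and keep the old image as the
# fallback so a bad update can't brick the device.
import hmac, hashlib

SIGNING_KEY = b"device-vendor-key"  # stand-in for a real public-key scheme

def signature_valid(image: bytes, signature: bytes) -> bool:
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

def apply_update(image: bytes, signature: bytes, slots: dict) -> str:
    if not signature_valid(image, signature):
        return f"rejected: bad signature, still running slot {slots['active']}"
    inactive = "B" if slots["active"] == "A" else "A"
    slots[inactive] = image            # write to the inactive slot only
    slots["active"] = inactive         # flip atomically after a good write
    return f"applied: now running slot {inactive}"

slots = {"active": "A", "A": b"v1", "B": b""}
good = b"v2"
print(apply_update(good, hmac.new(SIGNING_KEY, good, hashlib.sha256).digest(), slots))
print(apply_update(b"v3", b"wrong-cert", slots))  # rejected, car still drives
```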
John Willis: That's something I was thinking about the other day. I met a woman, and again, I'm just terrible at names, but she's in Europe, in Spain, and she's a quality person, and she wants to do a podcast. I'm like, oh man, you need to meet a couple of people I've been talking to, so I'm going to introduce you to her. And again, I apologize right now to anybody whose [00:29:00] name I don't remember, but I'll put it in the show notes.

The thing is, she reminded me of... she said a couple of things, we haven't done the podcast yet, but... you're going to hear me a lot on podcasts this year, I want to promote my book, right? I wrote a blog article sometime last year, from a famous football coach who said: the standard is the standard. And the way I started thinking about that is: how come banks and insurance companies and the like don't have a chief quality officer, right? And I know the concept of quality sort of started off on the wrong foot. Quality was these people that stop everything, sort of like security, and then DevOps helped us tame that concept a little bit. And then there are people like you who are very enlightened. Mike Harris is another [00:30:00] one who does a lot of Deming and DevOps quality. So I think the thought of saying to somebody, hey, why don't you have a chief quality officer? Are you kidding me? We got rid of that concept years ago. But when I wrote that article, I looked at the top 20 banks, and none of them had one. And the reason I got onto this: I did some consulting work for a pharma company that does the cold chain supply chain for the Pfizer vaccines. I got to go in there and spend a week with them, and I got to meet their chief quality officer. And she was very steeped in Deming, and she had her fingers in everything. Why? Because people die, right? But then I started thinking: at what point in lean [00:31:00] did everybody raise their hand and say, hey, let's not include the topic of quality in lean anymore? It doesn't make sense, you know? I mean, I know I'm going over, but I wonder a lot, and I'd love to hear your thoughts: in the right way, shouldn't every corporation have somebody who's just going around and making sure quality is top of mind?
Adam Hawkins: They might frame it differently too, in a sense. Not necessarily quality, but: who is the customer advocate? Without necessarily thinking about it in terms of quality. The difference, though, between being only a customer advocate and being, say, a quality advocate, is that as only a customer advocate you might just think about how to represent the needs of the customer: collect the feedback from the customer and bring it back to my people, right? But if you're the quality person, you're thinking: okay, how can I take that feedback and integrate it into everything? And then how can I check the result on the other [00:32:00] end, that we are actually delivering these things to the customers, right? You're purposely creating that feedback loop.

And then to your other question of how lean missed some of this stuff: having read a lot of the books about it, I think it's not really there, now that I think about it. I've got a bunch of these books on my bookshelf: The Birth of Lean, Taiichi Ohno's books, The Toyota Way, some of these things. It's there implicitly, but not explicitly. A lot of the stuff is: okay, this is how we do kanban, we have jidoka, we stop the line, we do these things, we're just-in-time. But then you read, was it Quality the Japanese Way? I can't remember who wrote that book, but that's where there are those two pillars, right? You have [00:33:00] total quality management and the Toyota production system. Once you combine those two pillars, you actually see the whole picture. But for some reason we just focus more on one pillar than the other.
John Willis: In effect, when I was doing a little bit of, not heavy, research, I looked at banks, and then I looked at manufacturing, and a lot of the large manufacturers have chief quality officers, just like a lot of pharma companies do, right? And I thought, when you were talking about a customer advocate... the danger in the customer advocate, and I think me and Bill kind of tried to dissect this too, right? So, in an organization... he talked about the idea that if you're building stuff at home, say a new deck or something like that, you are, every step of the way, the supplier and the consumer, right? And we can come back to that, because I think that was brilliant. So there's some [00:34:00] implicit exchange there: I'm going to do this right, because I'm the one that's getting it. But going back to the customer advocate thing, I think what that sometimes misses is all the steps in between, right? Like, you're writing code... well, I write code, you're testing it, somebody else is trying to implement it into a platform, somebody's managing the platform, and maybe the customer advocate is not breaking down the system. You know, make me king of the world, and I would say: let's try to define a role of chief quality officer for a bank, for a software company, where their focus is not... I mean, certainly customer success, customer advocacy, but they're looking at it not as left and right sides of some dotted line between everything the company builds and everything the customer gets. Does that make sense? [00:35:00]
Adam Hawkins: Yeah, well, it does to me, because in order for that role to succeed, you have to think in value streams. And at that point, you don't really become a quality manager, you become sort of the value stream manager. Because, to your example: you're building some software, it's going to be shipped out, it's going to be tested, it's going to go to some person, and you're going to have somebody on the other end, like a system admin, trying to get this thing running. You need to see that whole thing as a value stream, and how it delivers. What is the aim of that value stream at each step of the way? What's the final outcome? What's the whole aggregate aim of that thing? If you can think of that, you can work backwards all the way through, right?
John Willis: Like, I'm being an annoying naysayer, but there again, right? That goes back to: what part of lean left out quality? Because the whole value stream map, the value stream, all the stuff going on there, is brilliant, right? And, you know, Mike Rother wrote the [00:36:00] original book on the concept in America, Learning to See, and it had already been going on before that. And I follow a lot of Steve Pereira, again, a friend of mine, but I'm probably mispronouncing it, so I call him Steve P. I just call...

Adam Hawkins: Him Steve. There you go.

John Willis: Yeah, you get in trouble when you get a couple of Steve friends. But let's be sort of transparent here: most of the conversation about value streams is really not about quality. It's about flow and waste. Oh yeah, for sure, right? But again, and I'm not saying you're saying this, if we say, well, okay, we've got a customer advocate, then we've got quality? No, not really. If we've got a value stream, if we're focusing on somebody who manages the value stream, have we got quality? Not based on all the literature and the knowledge and the books that are talking about it, because they're more about the important concepts of flow and waste, right? But I don't know that they're thinking about it... they're certainly not applying [00:37:00] the Taguchi loss function.
Adam Hawkins: No, that's for sure. But I think if you add one other metric, which is key to adopting value stream management, which is percent C/A, the percent complete and accurate, then you get one step closer. Because you don't get the precision, but at least you get the accuracy, right? Percent complete and accurate is: what's the percentage of work completed by one step that is immediately usable by the next step, without rework or anything like that?
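For readers: percent complete and accurate compounds across a value stream, which is what makes it a quality signal rather than just a flow signal. A minimal sketch with hypothetical numbers:

```python
# A minimal sketch (hypothetical numbers) of percent complete and accurate
# (%C/A): the share of work each step receives that it can use without
# rework. The rolled %C/A of the whole stream is the product of the steps.

steps = {            # hypothetical value stream
    "code":    0.90, # 90% of stories arrive clear enough to implement as-is
    "test":    0.75, # 75% of builds arrive testable without clarification
    "deploy":  0.95,
    "operate": 0.80,
}

rolled = 1.0
for name, pca in steps.items():
    rolled *= pca
    print(f"{name}: %C/A={pca:.0%}, rolled so far={rolled:.0%}")
# Four individually "decent" steps compound to roughly 51%: half the work
# needs rework somewhere, which is where the quality conversation starts.
```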
John Willis: And this is why the work you do is incredibly fantastic. But how many people are using percent C/A with statistical process control and variation?
Adam Hawkins: Well, first you've got to get them to even start using percent C/A. Many people don't even see the value stream first, right? So that's part of the challenge too. And I think this is one of the things you wanted to talk to Bill about, and I was really hoping you would get to it as well: how do these ideas actually come into software? Because you and I have had [00:38:00] these conversations before where, okay, with DevOps we have at least the concept of value streams. But do we actually do value stream mapping? I don't know, there are probably organizations that do, but it's not as widely adopted as, say, something like continuous delivery, which is sort of an implicit thing now. If you're doing software, you're just doing this, and if you aren't, what are you doing?
John Willis: I wanted to have that conversation with him, and I was so tempted to ask it in that podcast, but I realized: let me just get my head around Taguchi, and we'll do the same thing with Ackoff, you know? And then, I think, the next set of conversations... but I'll just tell you something I'm really thinking about. I want to do it, I've just got to get it all coordinated. My problem is I'm like the dog with the squirrel, easily distracted. But one of my thoughts is: I wanted to do a launch party for the Deming profound knowledge book, but I [00:39:00] wanted to do it in a unique way. So I've talked to my friend Alan Shimel of Techstrong, devops.com; he's going to help me do this in a way that doesn't break my bank. Maybe instead of just doing a party, do a virtual conference day. And by the way, you will be invited to be a speaker. But I thought maybe I'd get Bill Bellows to be on a panel with, like, you, Mike Harris, a couple of people that were in that book club, the Deming profound book club that we ran last year. I mean, that question actually came up this morning, when I popped my head in, I hadn't been going lately: man, I wish you would have talked about this. And I'm thinking, you know what? Instead of me asking the question, let me see if I can get a panel together and have you and Mike Harris and a couple of other people, and just draw them into it. And I'll probably do it in podcast form too. But I'm going to try to put together something in March, where it will be a virtual [00:40:00] book launch party, but instead of everybody just patting me on the back, it'll actually be a set of presentations from people who are old-school Deming people, but who have now taken the torch. But anyway, yes, I was dying to ask that question during the podcast, but I knew I just had to get through that whole struggle.
Adam Hawkins: The foundation, right? Like, let's build that shared mental model together. Okay, now how do we take this and apply it to this context, right? And you and I have been sitting around the edge of that conversation, which is: there are more natural entry points for applying these ideas in some parts of software than others. And I think if you work in SRE, you're already closer to the empirical part of software than, say, the application developer side, because your whole work is centered around metrics. It's empirical, right? And from there, you take the aim of the system: okay, how do I measure that? [00:41:00] And then you create your SLIs, you can set SLOs, and if you care about it enough, you can go down to SLAs. But if you just start with SLOs, you're already looking at statistics, you're already thinking in terms of accuracy. And then if you're looking at charts, looking at what's going on, well, then you're automatically introduced, even if you're not explicitly aware of it, to special cause variation and common cause variation. Because, hey, if I'm going to be on call, I want to be paged because a special cause has changed the behavior of the system in some undesirable way, versus being paged for routine behavior of the system, i.e. common cause variation. So you're implicitly making these distinctions, maybe without the language to understand them, right? And then, because of where these people typically sit, at the [00:42:00] platform end of the value stream, for them to be effective in that role they have to see the left of the value stream and the right of the value stream. You may not explicitly think this way, but you have to be aware of the value stream to be successful in that role, because you're a representative of the quality of the product shipped to the customer, but you're also bringing back the feedback of what's going on in production, and how getting code into production impacts the earlier work in the pipeline on the engineering side, right?
John Willis: No, I think, you know, there's a quote I use... I read it somewhere and I could never find where it came from, so I just treat it as an unattributed quote: misunderstanding variation is the root of all evil, right? Which is like that getting called at three in the morning, right? Is it something where I have to wake up and come to the [00:43:00] building and drop everything, or is it, you know, what Deming would say, the 97 percent versus the 3 percent, right? If it's basically the 97 percent: okay, deal with it now, and we'll think about whether we change the management, all the things Deming would say; that's something we can do tomorrow.

The other thing you made me think about: I often say that Deming would hate OKRs, right? And we don't have to go deep here, but I think he wasn't a big fan of results; he wanted to know the method. What is the method that you used? But then I say: what about Deming on SRE, particularly SLOs? And people will finish my sentences: oh, he'd hate that too. I'm like, no, no, no. He would love SLOs and SLIs, because, if done right, they're decoupled from the human, right? The problem [00:44:00] with OKRs, again, in theory the way they were supposed to work, but the way I've always seen them work, is they are part of exactly what Deming hated about MBOs, which is that humans will gamify them. You tell them they have to have these objectives and results, and they're either going to do them, not do them, or fake them, right? SLOs and SLIs, unless you've implemented them terribly wrong, typically don't have a human involved, right? And so Deming would be like: yes, you guys figured it out, this is awesome.
Adam Hawkins: Well, we can even bring this back to Taguchi, because if you think about SLOs, just to go back to them as a concept, right? Google, you know, popularized them, at least in some of the literature. Well, your SLI cannot be 100 percent, right? [00:45:00] It's not going to happen. It's the internet: things happen, there's a power outage, there's whatever, things are going to break, right? So you adopt the framing that things are going to break. Then the question is: how much breakage are we willing to accept? And the people who make that decision are thinking in terms of: what is the minimum loss of quality I'm willing to impart in this system to somebody else, right? And that decision is a question of value. Google makes this point: is it worth the money to get an extra nine in your nines, or not? Maybe if you're Amazon, you want six nines on checkout, because that extra nine is worth a billion dollars a year in revenue. But if you're just, you know, Acme small company, maybe two nines is fine for how many people are actually using your product, right? It's that [00:46:00] natural entry point into thinking in terms of how much loss you're willing to accept while still keeping the system valuable to the end user.
John Willis: I love Mark Schwartz's The Art of Business Value, right? He makes the point that it's not about customer value, it's about business value. In other words, is the customer always right? He runs through some examples, but I think of Nordstrom, right? For Nordstrom, there are some customers that are not right for them, you know what I mean? They want the one that's going to come in and, you know, they'll Sherpa you around. When my son graduated from high school, he did really well, he got into a great college, so I said, what do you want? He says, I want to go to Nordstrom and do one of those gigs, right? [00:47:00] Where you literally spend half a day with some expert. And they don't want the person that's going to go to a bargain store and find, you know...

And we tell the story... I remember we were sitting around at a DevOps Enterprise Summit, having drinks after the sessions, and somebody there worked for one of the large cable providers, and somebody else said, you know, you guys stink in whatever city, let's say St. Louis; it wasn't St. Louis, but I'm not going to say what the company name was either. And he said, oh, you're in St. Louis? Yeah, we don't care about that region. We don't put any emphasis there. That's not a market we really... we serve you, but...

Adam Hawkins: But, like, we don't care.
John Willis: But that is sort of a variant of the Google how-many-nines thing, right? Not all applications are equal, right? But a manager comes in, and this is the problem with Western thinking in most of the non-innovative companies: all of our applications should be [00:48:00] four nines. Well, by the way, four nines is incredibly hard to do. It's why Amazon's EC2 is, what, three nines, I think, right? Because of the amount of cost it would take them to get to even four nines. Anyway, I think Google, the SRE book, covers this, and I tried to capture it in my Deming book, right? What is the cost of an extra nine? Does it outweigh the benefit? But there are two things: one, does an extra nine equal the amount of money you're going to make or lose? And the other is that not all applications are equal. And this goes back to black and white thinking. It's not quite the meets-specification-or-not question, but a lot of leaders in Western command and control organizations are, you know: zero defects, all of our applications will be four or five nines, right? And everybody's like, okay, I guess, [00:49:00] you know, the guy's crazy, but I guess we've got to try.
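For readers keeping score on the nines: the downtime arithmetic John is pointing at is worth seeing once. A minimal sketch (pure arithmetic, no assumptions beyond a 365-day year):

```python
# A minimal sketch of what "another nine" actually buys: allowed downtime
# per year at each availability level.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines in range(2, 6):
    availability = 1 - 10 ** -nines          # 0.99, 0.999, ...
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.5f}): "
          f"{downtime:,.1f} minutes of downtime allowed per year")
# 2 nines: ~5,256 min (~3.7 days); 3 nines: ~526 min; 4 nines: ~53 min;
# 5 nines: ~5.3 min. Each extra nine costs 10x the discipline for 10x less slack.
```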
Adam Hawkins: Well, that kind of black and white thinking also makes me think about earlier in my career, working as a software engineer and thinking: okay, what's my goal? Is the goal for me to never have a bug, zero defects, these types of things? And then everything needs to be like this, because there's only one way to build software correctly, and you can just run with that, right? And then, over time, the world chipped away at that understanding of things, for one reason or another. And once I was able to break out of that "all applications are the same, they all need this" thinking, it was like: no, no, no. Actually, if you move away from black and white and adopt shades of gray, you're able to succeed in more domains and more contexts, because black [00:50:00] in one case might be white in another case, right? But you have to be able to map those things and make those trade-offs, and be willing to see that quality is contextual, right? You don't have to hold the same commitments all the time. Like, how many times have you been in an organization where the stuff they ship to their customers is great, but their internal tools are absolute garbage?
John Willis: Yeah, oh man. I won't name names, but if you've read, over the last 10 years, some ex-employees of some large cloud providers, who are incredibly commercially successful... the people who have left have described the sort of spaghetti nature of the infrastructure. So yeah, that can happen too. And I think maybe we should wrap this up, so that my poor son doesn't have to turn this into a two-parter and I don't [00:51:00] make the mistake of not telling people it's a two-parter. I think that quality is infinite, which, in your blog article, your assessment of the two-part Bill Bellows episode, was to me, excuse the pun, profound.
Adam Hawkins: Well, thank you again. I think that's the key insight, and that's why I wrote it. I knew, listening to you two talk with each other, that there was going to be something at the end that I would learn, and I'm really happy that you had the conversation. And there's still one other point that we won't talk about now, but Bill's example of cutting the wood. I was like, oh my God, where does he get these things? Gotta be a two-parter.
John Willis: A two-parter, yeah. That was the one I kind of replied to you about. You know, I often think about the carpenter's credo, right? Measure twice, cut [00:52:00] once. And you were like, no, no, no, that's not even precision or accuracy. But I kind of wish I could go back in time and say, no, here's what I was trying to say. Because in one of Deming's books, he even talks about... it's not Out of the Crisis, it's one of his older books about variation, a very technical book... he talks about ordering a very expensive glass top for a coffee table. He gives the example of how you wouldn't just measure it once. If you were Deming, or you were some freak of statistical process control, you'd measure it like ten times, right? For the special versus common cause. And then, once you realized there was a process there, you could take a number you felt comfortable with. You could go nuts with this; that's the whole point of the story about pragmatism, right? The guy who [00:53:00] came up with pragmatism was obsessed with pendulums, and the accuracy of a pendulum, and there was a point where he figured out you could do this forever. It's like the nines, right? The whole point of pragmatism was that there is a pragmatic stopping point. So back to Deming: it was based on the pragmatic notion that if I measured it ten times and figured out two of them were crazy, maybe I was drunk, maybe I was lopsided, I'd throw those out. Now I can look at the upper and lower control limits of those measurements, and maybe they're really small, and whatever the mean is, is okay, right? And so what I was trying to describe is, to me, the beautiful connection to the carpenter's credo. Because measure twice, cut once really means: if I measured once and it was here, and I measured [00:54:00] twice and it was way out there, I'd probably measure at least one or two more times, right? And if I could go back in time, that's what I would have said: that is why I thought it was more about precision.
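For readers: John's measure-ten-times story maps directly onto computing control limits after screening obvious outliers. A minimal sketch with made-up measurements:

```python
# A minimal sketch (made-up measurements) of the repeated-measurement idea:
# measure many times, drop obvious outliers, then compute the mean and simple
# 3-sigma control limits before you commit to the cut.
import statistics

measurements = [48.02, 48.05, 47.98, 48.01, 52.3,
                48.03, 47.99, 48.04, 44.1, 48.00]

median = statistics.median(measurements)
kept = [m for m in measurements if abs(m - median) < 1.0]  # crude outlier screen

mean = statistics.mean(kept)
sigma = statistics.stdev(kept)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # upper/lower control limits

print(f"kept {len(kept)}/{len(measurements)} measurements")
print(f"mean={mean:.3f}, control limits=({lcl:.3f}, {ucl:.3f})")
# If the limits are tight, the pragmatic point is reached: cut at the mean.
```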
Adam Hawkins: Well, at that point it becomes: measure at least twice, cut at most once.

John Willis: That's right, there you go. Well, you only get to cut once, right? That is the one thing, but measure at least twice. But yeah, that was a fun one. And at first he was sort of like, I'd have to go back, but he was like, no, no, no, that's just... and I'm like, no, I was thinking of it at, you know, the smallest sort of particle, like whether you could actually see the difference between the two lines. Okay.
Adam Hawkins: Yeah, well, I think that's another... okay, we won't talk about this for too long, but the whole way that Bill describes this is: okay, you're a carpenter, right? Are you going to draw one line horizontally across a [00:55:00] piece of wood, or are you going to draw two lines, some inch apart or whatever? That's setting the control limits, right? And imagine you go into somebody's workshop and you see them working like that; think about what you would think of that person and the way they were working. How weird, how kind of alien that would seem, without so much front-loading of their mental model. You'd just be like, what is this person doing? And I feel like, when it comes to some of these ideas that you and I are talking about, and we talk about value streams, and then percent C/A, and you get waste and all this, there is so much brain-loading you have to do to get to the point where you can see why this way of working would even be sensible in some context or not, right? It's just an interesting thing I continually think about as I try to bring more people over to my side, you know. [00:56:00]
John Willis: It is. You know, I think it's part of the curse and the blessing of becoming a Deming fan, right? Because you get to these plateaus, and then, okay, there's another plateau of learning. But I think one of the things that makes it difficult, you're right... I remember in the informal book club we had last year... you know, I'm at a place where I sort of understand statistical process control, common cause versus special cause, and I'm constantly calibrating the way I think about it against people who know it much better than me. And again, that's why it's awesome to talk to Bill. But trying to explain it to somebody who's never heard it before, there's such a disconnect between where you are and where they are, or where somebody who's even further along than I am is, that you can watch the struggle. Like, well, John, I just don't... Because you know what? Part of the problem, too, is we've been [00:57:00] so trained to think that the way we think about monitoring and telemetry is black and white, you know? I always say there's no such thing as the number 42, right? The Hitchhiker's Guide universe. It's 41.99999, or everything is floating point, right? There is no exact number when you start thinking about physics. And so... but we are so trained to say, if it's above 80, it's good. Well, really? Have we looked at all... you know, we could go into a three-parter, which I know better than to do, but let's table it. This is awesome. Just tell people how to find you. I'll put it in the show notes, but...
Adam Hawkins: Yeah. So you can find me on my podcast, Small Batches, at smallbatches.fm, [00:58:00] which has links to everything. Or you can find the Substack at softwarekaizen.substack.com.
John Willis: Brilliant. It's always a pleasure, my friend, honestly.
Adam Hawkins: Pleasure's all mine, John. I love talking to you. And we could go on forever, but
John Willis: I think we won't today. We have day jobs, we have people who have day jobs, right? So, all right, good enough. All right.