S4 E25 - Dr. David Woods - Resilience and Complexity: Part Two
In this second installment of my conversation with Dr. David Woods, we continue our exploration of resilience engineering and complexity science, focusing on practical applications and actionable strategies. Building on the foundational concepts from part one, Dr. Woods offers deeper insights into how organizations can thrive in unpredictable environments by embracing resilience as a core competency.
We dive into the nuts and bolts of designing systems that can adapt and recover, emphasizing the importance of fostering collaboration, continuous learning, and feedback loops. Dr. Woods connects these practices to W. Edwards Deming’s teachings, particularly the interplay between profound knowledge and operational flexibility. Our conversation also underscores the significance of learning from near misses and small failures, treating them as opportunities to strengthen systems rather than vulnerabilities to hide.
Key highlights include:
The Adaptive Cycle: Dr. Woods introduces a powerful framework for understanding how systems evolve and adapt over time, offering lessons for IT, healthcare, and manufacturing.
Learning from Disruption: Examples of organizations that turned crises into growth opportunities by leveraging resilience principles.
Operationalizing Resilience: How leaders can embed resilience thinking into daily operations through deliberate design and cultural shifts, echoing Deming’s focus on systems thinking and constancy of purpose.
This episode serves as a practical guide for anyone seeking to bridge theoretical concepts with real-world applications. Dr. Woods leaves us with actionable takeaways on how to lead and thrive in an era of constant change, making this a must-listen for leaders and practitioners alike.
Transcript:
John Willis: So I want to, I mean, as an example of that, everything you just talked about, it's sort of chaos engineering, right? I think that's what we've tried to create: chaos engineering puts us on the boundary regularly, otherwise we're feeling our way with everything we've got.
But again, one, do you agree with that? And then two, how do we tie this back [00:47:00] to what we're missing right now in all this gold rush? How do we get that resilience, that familiarity with the edge or the boundary?
Dr. David Woods: The first one's easier to answer. The second one is pretty expansive.
Okay. The first one is straightforward. Chaos engineering is a great example of bottom-up innovation to deal with the reality of this natural world that we live in, where growth and complexification are normal processes. Our successes produce growth, and inevitably a partner in growth is complexification. That's what we've been talking about, right? And that complexification induces penalties. The capacities that fuel growth don't automatically include the capability to mitigate the complexity penalties that go with growth, the processes of complexification where we work at new [00:48:00] scales and in new relationships.
We have hidden interdependencies. You can't map all the interdependencies. I have colleagues who keep trying to map all the interdependencies, with better and better techniques for doing it, and it is good to map interdependencies. But it turns out the new science says you can't get them all, no matter what you do.
You can't get them all. And just as you can't eliminate the sources of surprise, you can't have an almost-perfect model of the world. It turns out, in this universe, those things are not going to happen, no matter how much resource you have, how smart you are, whatever the latest technology revolution is, right?
Models will be limited. The world will continue to change. And you'll miss interdependencies. So chaos engineering is a bottom-up innovation, right? Really cool, which says: how [00:49:00] do I find the hidden interdependencies that can burn me when I have a system undergoing growth in order to provide valued services?
If your services aren't that valuable, you're not growing; you're not an effective organization. If you're an effective organization, you're growing because you provide more value, which means complexity is going up. So hidden interdependencies can undermine your delivery of those valued services.
So outages, right? The simple empirical observation in our SNAFU Catchers work, which everybody knew and we just said out loud, right? What's the most common thing during an incident that is effectively handled, or in the aftermath when you analyze what happened? It's: I didn't know my system worked that way.
It isn't how it failed. It's: I didn't know it worked that way. I didn't know it functioned that way. I didn't know it did that.
John Willis: Right. [00:50:00] Yeah. Yeah.
Dr. David Woods: And these things could interact in a way that undermines the delivery of these services under these conditions. And so, bottom up, you're understanding patterns in your world, right?
And you're building more ways to deal with it. Now, the problem is the other layers that you have fueled because of this value. They don't understand any of this. They still think the system should just work. They think: I should be able to buy more reliability, and every year I should buy it cheaper.
So why do any incidents happen? Why does any outage occur? There must be a cause. Like one CEO who happened, by serendipity, to sit in my office one day, who happened to run an airline, all accidental that this happened, it's not because I have any great connections, and who happened to have had two major public IT [00:51:00] infrastructure failures, which made the airline unable to operate and cost them hundreds of millions of dollars. So I have him there, and I very gently am asking him about these. He says to me: I know what happened the first time. It was Bob.
John Willis: Yeah. Oh, my God.
Dr. David Woods: It was Bob. Oh, yeah. So, this false causality about systems, this idea that there must be a fault. The system wouldn't work at scale, growth couldn't be fueled, if you weren't fault tolerant. And in fact, for most of the problems that occur in critical digital infrastructure, there isn't a fault, not in the sense of something the system wasn't designed to handle in the first place; it handles those problems regularly, because it's designed to work. Think the internet, right? We have built these to have adaptive capacity.
John Willis: Yeah, I agree with all [00:52:00] of that. I remember reading the congressional report on the Equifax breach, right? To me it read like an Air France 447 report, listing all the things that could go wrong, and then at the end it says: but our conclusion is somebody didn't patch the system. But my point about chaos engineering, and I don't want to over-rotate on it because I do want to get back to the AI thing, is: whether it's bottom up or whatever, is it a way to think about creating adaptive capacity, the way hospitals or emergency rooms have it?
Dr. David Woods: In particular, it's focused on discovering hidden interdependencies.
John Willis: Okay.
Dr. David Woods: And that's why their biggest finding is: I don't actually have to do the chaos experiment. I don't have to inject the fault. Just planning to inject a fault that would challenge my system, essentially a walkthrough of the planned [00:53:00] failure I want to inject, usually reveals that the system works differently than you thought. Now, notice what that means. You can't have a model that's completely accurate, right? The system has gotten too big. Everyone has a partial model; the models we have of the systems we run and depend on are partial and incomplete.
They have to be; this is not optional. You don't have to feel you failed because your model is partial and incomplete. It will be. Resilience and adaptability say you have mechanisms to update, mechanisms to revise, mechanisms to reframe. And that's what Norbert Wiener was getting at [00:54:00] 75 years ago when he said: be careful.
Our machines have models. They are now so capable, they operate as if they have a model of the world. And because of that model, they have these powerful capabilities we as human stakeholders can take advantage of. But they are literal-minded. They can't tell if their model of the world is the world they're really in.
And so they can do the right thing according to their model while they're in a different world. People can be literal-minded too. When I did the initial studies in the eighties of behavior in these different kinds of anomalous, challenging situations, and my colleagues around the world started doing the same, we looked at how humans could get literal-minded, how they could get stuck.
Our original studies didn't show that people got it wrong [00:55:00] when an anomaly happened in a dynamic system. What we found was that their initial hypotheses were plausible and reasonable for the evidence available at the time, given the priorities of that particular operational world. What happened is they got stuck in their initial plausible interpretation while the world changed or more evidence came in, because faults combined, problems and disturbances cascaded, combined, and moved on, and they got stuck in one view of the world while the world kept changing. So they started taking actions that fit their model of the world, but it wasn't the world they were in anymore. That's literal-minded. And what that leads us to is this issue of: can you revise? The ultimate form of revision is reframing, and that's the third challenge for adaptive systems. The first one is: can you keep pace? The second is: can you synchronize across the [00:56:00] different roles and layers, because no one unit by itself can handle all the challenges that can arise? You can't perfect one component or subsystem or agent; you need to coordinate. And the third is: you need to revise and reframe.
Well, guess who can reframe? Not AI, right? Not AI Gold Rush 1. Not AI Gold Rush 2. You know who can reframe? People. And how hard is it for people to reframe? Really hard.
John Willis: Really hard.
Dr. David Woods: Yeah. Yeah. I mean, people don't always reframe. People often don't reframe. And the people who are good at reframing in one setting may not reframe in other settings.
John Willis: Kahneman wrote a couple of books about this, right?
Dr. David Woods: It's not really in Kahneman.
John Willis: Oh, it isn't? Isn't that bias?
Dr. David Woods: No. Kahneman doesn't, Kahneman can't. The decision-making frame can't deal with this, because ultimately the decision-making [00:57:00] frame reduces the dynamics of the system in order to concentrate on a point and ask what makes a difference in a decision that changes the direction within this little envelope.
John Willis: Okay.
Dr. David Woods: We look more expansively at the temporal flow, the temporal flow of keeping pace. It's time, it's relationships over time. Synchronization is: how do I come in and play together in new ways as I approach saturation, as load gets bigger, as stress gets higher? How do I synchronize and integrate to have a bigger effect than any one party can have?
John Willis: So this is where keeping pace comes in. But bias is a part of it. You're just saying there's a temporal aspect that expands it.
Dr. David Woods: I don't really want to get into bias.
John Willis: Okay, all right, fair enough.
Dr. David Woods: I will say this on bias. Is the bias in the operator who [00:58:00] handles anomalies at the sharp end, in the people at the sharp end who deliver resilient performance even when everyone tries to beat it out of the system?
John Willis: Yeah, yeah, yeah.
Dr. David Woods: A misguided view of how their system works, right? The people with biases are the distant parties who don't understand work-as-done. They have a work-as-imagined view. And that work-as-imagined view is a reductionist view. It's a static view. It's a categorical view, not a dynamic, adaptive view.
John Willis: I guess how I got there was...
Dr. David Woods: Work is about verbs.
John Willis: So I guess how I got there was through the idea of somebody having a hypothesis, and then hanging on to it and not revising it, right?
Dr. David Woods: Right, but that fixation comes from a line of psychological research that's not Kahneman.
John Willis: Okay, okay. Trust me, I can't argue with you on that.
Dr. David Woods: So it goes way back [00:59:00] into problem solving. And this is getting at why, in the history of psychology, dealing with cognitive things before they called them cognitive, problem solving was a different process from decision making. Decision making was selecting options at a point; it kind of froze everything in a moment and said: now, which way are you going to go? What are your options, can you generate some more options? Now, there was an overlap, there's a connection, but problem solving was always dynamic.
You're in the middle of something. What's the next move? How do you put this together? How do you see threats? That's why they used chess, as adversarial,
John Willis: brilliant,
Dr. David Woods: Not in the Simon sense, but in the original sense of: how do you use your bishops? A bishop, in attacking, requires room for [01:00:00] maneuver. You can counter a bishop. If you know your opponent likes to use the bishop to attack, you counter them by using pawns to block space, right? So now somebody starts with an opening to open up their bishops and attack, and the other one uses pawns to restrict space.
Now, how does the first one adapt to this? The adversarial game is one of adaptation and re-adaptation. That's what was missed when people thought about it as moves and computation, my machine can look ahead more moves. That's not how chess masters are thinking.
Dr. David Woods: They're thinking in adaptation and re-adaptation, counter-adaptation. They're thinking in micro-cycles of this, going on at any level of matched skills between the adversaries in a chess game.
John Willis: So I think this is really cool, because it gets us to, if [01:01:00] you listen to the Erik Larson episode, and I'm sure you haven't had a chance to read his book, one of his arguments is that this is one of the problems with the inference version, the sub-symbolic or neural network version: I think, if I'm right, it's fixated. It's been designed from a problem-solving perspective.
Dr. David Woods: The problem here, I think, and you and I have talked about this before, is between a micro level and a macro level. When we move up to adaptive cycles at any scale, when we look at processes of growth and complexification, we look at the need for future adaptive capacity, because systems will become more brittle than their stakeholders and developers realize. And so they need some form of extra adaptive capacity, what we call extensibility, graceful extensibility. And the discovery is that graceful [01:02:00] extensibility, this extra adaptive capacity, is a universal requirement.
All systems have to have it in this universe; otherwise you're too brittle. And remember, if you have a brittle collapse, that's painful. There are penalties. You're injured. Take the airline example. Airlines don't operate on a margin where, yeah, every few years we'll have an IT infrastructure breakdown, we'll lose 300 or 400 million, hey, okay, we'll make it up. CEOs get fired. Organizations have layoffs. They reconfigure, because that's a major injury to them.
John Willis: Right.
Dr. David Woods: They may often survive. This airline still flies, for a variety of reasons.
John Willis: I know one of the reasons. I won't say the airline, but where you live has a lot to do with it.
Dr. David Woods: But we have examples of ones that didn't survive.
Like the Knight Capital brittle collapse: runaway automation, and an [01:03:00] inability to keep pace with it and to synchronize across the different roles and layers in the organization, led to a delay in revising their policies on how to adapt, on turning off trading, which in foresight is a really big thing to do. But that's not the way this played out; it went too slow and stale, right? It was slow, stale, and fragmented. And it didn't take long for the runaway trading to bankrupt the company, for all practical purposes.
John Willis: And my analysis, I didn't do a formal postmortem, was that, like, literally, a missing comma created the runaway train. But yeah. So how do we get this back to AI? I think we did: the model doesn't understand the world that it's not a part of. As we think about how do we, I don't know, deconstruct all of this, how do we think about it now, where we are today?
Dr. David Woods: Okay. Take the [01:04:00] three. A simple way to do this is the three ways that adaptive systems can break down, or, flipped around, the three ways adaptive systems have to be provisioned in order to be adaptive. These are inherent existential risks. All right. And we'll start from the middle out.
Lorin Hochstein likes to remind us that, due to growth, our systems always involve multiple parts that interact, and the interactions matter. The interdependencies matter. That means they're not linear, independent parts that simply add up. In that case, anything that happens in the system is a coordination issue, a coordination and synchronization issue, right?
Therefore, you must have ways to effectively synchronize and coordinate over multiple roles and layers. So, okay, how do I translate that into some of the practical settings I've been asked to [01:05:00] work on? I've got it in the formal theories behind the theorems and empirical laws. We've got formal, comprehensive theories now too, just so people know the science is getting more and more mature.
Well, let's get pragmatic. So we go: I'm going to introduce some new autonomous capability. It doesn't matter what technology generates it. It's a dynamic world; there are multiple players in this world, there are other machines with some degree of autonomy, and there are different human roles with different kinds of responsibilities for the overall system I'm working in. So when I introduce the new machine autonomy, the question is: can it act as a coordinating partner in a joint activity space? Can it coordinate with others in a joint activity? Does anyone create AI, in Gold Rush 1 or Gold Rush 2, to meet that criterion? In both cases, the answer [01:06:00] is no. In both cases, the answer is you could.
And if you tried to, you wouldn't be designing AI infrastructure anymore. Your point, that if you keep hiring AI experts you really just build infrastructure, is exactly right. You're not building solutions. You're building infrastructure. To have a solution, you have to say: how does this play with others when organized activities are disrupted? And so we do a simple pragmatic test. There's an organized set of activities, and we throw a disruption at it. There's a small disruption, and then the important disruption occurs. How does the autonomy interact and coordinate and synchronize with others in the face of those disruptions, and resume a reorganized set of activities to pursue all of the goals of the different players at the different layers?
[01:07:00] How do you manage the growing combination of disruptions in a synchronized, coordinated way? It doesn't mean everybody has to understand everybody, but are you a collaborative agent in a joint activity space? And you can design this. Could you even use some of the power of AI capabilities to do it? Yes.
Dr. David Woods: Would the AI capabilities alone do it? No.
So what else do you need? You need representation. You asked me, in your list of questions getting ready, about the power of representation, not the power of computation, the power of representation, right? And the power of representation is: how do I see what someone else is doing?
How do I see what they're doing relative to the larger context that we're both engaged in, in a joint activity space? How do I see the future? How do I see what's next? This is an old question. Another Wiener, Earl Wiener, in this case [01:08:00] a mentor, asked in 1989: what are the three most common things people say when they operate a highly autonomous system? What do the responsible operators at the sharp end of the system say? What's it doing? Why is it doing that? And I wonder what it's going to do next. If you can't answer those questions, you've failed at representation. If you don't help someone quickly, at whatever the time constant of the process you're working in is, those are temporal factors.
Remember, time never stops. Time always goes, right? So if you can't answer that in the flow of events, as things build and change, as multiple things threaten your cognitive work saturation, then you haven't done representation. And guess what happened in AI Gold Rush 1 and [01:09:00] AI Gold Rush 2? Nobody did representation.
And instead, in both cases, what they said was: explanation will do it. Explanation. Well, what happened with explanation in Gold Rush 1? It failed. Completely.
Because it turned into more data overload. It turned into slowing down the response in a time-dependent, temporal-flow process. Because you had to pause and say: what are you saying? Is that what's really going on? That's what you think is going on? That's what you think is the important thing to do? Well, it might understand what's going on in part, but incompletely. It might not have access to everything. It might be off base in one way, misaligned in one way. And in today's world, it might hallucinate. But anyway, what happened is it became another source of data overload [01:10:00] when the frontline people were already approaching saturation.
So the issue was: it's just part of the noise. It may be an integrated, high-inference, high-value source, but it's not the answer. It doesn't solve my workload problem. It gets in the way of me keeping pace with events. And so several groups that tried to build explanation switched to representation.
Now, interestingly, good representations required some intelligence, required some computation. It wasn't simply a visual pattern. You needed smarts behind the visualization in order to create a representation that helped people see what was most important and see what could happen next. So we call this Avery's wish.
If you look at the laws that we put together 20-plus years ago, we called it Avery's wish because Avery was a [01:11:00] real operator in a real high-status, high-risk world, and because he was high status, technology company representatives would come to him and say: we can supply you with whatever you want. Tell us what you need; our company has the technology, or we'll adapt it to your organization's needs. Your hospital will buy it. You'll recommend it. Tell us what you want. He would get this over and over again. And finally, one day, he told us, I turned to the tech rep and said: I want you to give me the technology that gives me a picture of the operating room 10 minutes into the future. If you can show me 10 minutes into the future, then, right, you've really helped me.
And of course the guy walked away chagrined. But that's what's essential. Can you anticipate? Can you see ahead? Notice we've hit on all of the essential pragmatic [01:12:00] high-level ingredients for resilient performance: anticipation, synchronization, revision, learning. Reframing is the ultimate form of learning.
There's the irony that we now have machine learning and it can't reframe. And they say AI Gold Rush 2 is going to help science. Wait a minute. Science advances at points of reconceptualization, and reconceptualization is reframing, you know, paradigm shifts. It's hard. Oh, by the way, people have studied science and reconceptualization. We can talk about the ingredients that go into it. Are those ingredients in Gold Rush 2 or Gold Rush 1? No. Could the technology's powers be harnessed in a way that supports those ingredients of reconceptualization? Yes. Are they by themselves sufficient to support reconceptualization and reframing? [01:13:00] No. Are reconceptualization and reframing hard? Yes, even when they're supported, and they're unbelievably hard if you don't support them. And then we go, well, it was a genius who did it. No. We've studied insight. We've studied reconceptualization. People can do it. We understand the conditions. We could help. That builds adaptive capacity.
Because if you don't reconceptualize, you're stuck in one view of the world, when the evidence coming in says you're in a different world. You need to adapt your responses. You need to adapt who you work with and how you work with them. The world is different. And what's the message our world tells us every day in the news, with this event or that event, this new technology or that new technology?
There is surprise, there is turbulence, there is change all around us. The world is not quietly saying, hey, things are getting a little different. It is yelling as [01:14:00] loud as it can: one thing we know for sure is the world is changing. Now, we don't know exactly where and how the changes are going on, and what parts are driven by geopolitical conflicts, by new capabilities in medical treatments, by new capabilities in technology, new forms; all of these things are in the mix, and the world is full of events that shock us. And it's saying: you need to change your model. You need to think differently. And the AI-ers are going, hey, this will do it. And the answer is no, you're part of the shock, right?
You're part of the turbulence. You're part of the growth, and you're part of a ton of complexification. And where do we see it? It's right there in front of us. AI slop and enshittification, hallucination, model collapse, [01:15:00] right? They themselves, in looking at their technology capabilities, have all these ingredients that say: we actually create messes, not just capabilities.
Our capabilities aren't a solution. Look at the messes they create as well. Look at the scale effects. Look at the way it undermines the growth of cognitive skills in people. I get easy, quick answers in class; do I actually spend any effort to learn and conceptualize, and reconceptualize as I learn?
Does it produce expertise or does it undermine expertise? Does it reinforce old ways, one way of thinking, which will eventually become stale? Yes. Does it help you revise? Here's the way of thinking; let's revise. How did it get trained? It got [01:16:00] trained on stuff. I mean, you know, people can dissociate.
And AI Gold Rush 2 is an enormous example of dissociation, because you've got people screaming on the one hand that it's the end of the world, and screaming on the other hand that we have the birth of absolute intelligence. I don't think AGI stands for artificial general intelligence. It stands for absolute general intelligence. You know, this is delusional,
John Willis: right?
Dr. David Woods: You have to understand the fundamentals. There are three first principles that drive the science, the new theories, the theorems, and the laws, right? There are always finite resources, so there are always trade-offs. Change never stops. And others are adapting all the time.
You're in a world with other players, [01:17:00] and they're adapting, and those players aren't just at your level. There are layers below and layers above, as well as layers around. And from those three emerge the fundamentals. There's no free lunch: anything you gain, there's a loss somewhere or at some other time. There's actually a new version of that result that just came out this year, even more powerful, about the ways computation is inherently limited. So this applies to all of the AI Gold Rush 2 computations as well: the computation, no matter how well you organize it as an inference toward whatever criteria you're trying to optimize, has a cost, a resource cost that can't be included in the computation. Finite resources matter and will create [01:18:00] trade-offs. Robust yet fragile, right? Growth leads to better and better performance, but the complexity leads to brittleness, so that signs of improvement, improved productivity and reliability, are punctuated by sudden brittle collapses.
We rationalize the brittle collapses away by attributing them to local factors, to individuals, by oversimplifying and linearizing what are in fact nonlinear and complex systems. What do the biologists tell us? This world is nonlinear. It is never linear. It's never just complicated. It's never simple. It's always complex, because it's always nonlinear, right?
It's always undergoing change. There are always trade-offs to be managed. Now, it can be quiet, and we can isolate pockets temporarily and make them linear. [01:19:00] We can be reductive, and our Western science, scientific management, reductionism, has produced capabilities that stimulate growth and complexification.
So where are we now? You can't hide anymore. Isolating parts off is still nice, but that's playing off to the side. As soon as you take what you learn and want to inject it somewhere, sorry, the other rules kick in. It's not like those rules are off somewhere else. No, it's not unknown unknowns. It's not that complexity applies only some of the time and things are merely complicated the rest of the time. No, no. The complexity is always there. It might be quiet for a while. You might be able to build some barriers when it starts knocking on the door. But it will bang, it will open the door. If you put up walls, it will knock them down.
And that's what the world is telling us in [01:20:00] accidents and shocks and change events, whether they're environmental, technological, or geopolitical, all around us. So what this means is: shift your focus, reframe. Should we pursue these sources of growth? Great. But devote a little resource in some other directions.
Don't rush quite so fast. You can still go quickly, but slow down and think a little, integrate with some other stuff. What stuff? Think about the world of adaptation, the need for future adaptive capacity, this graceful extensibility, the need for new kinds of architectures that go beyond the new capabilities we're generating: new architectures that build on the developments in software and infrastructure, in communication and computation, that [01:21:00] connect us together, lead us to see so many things, and bring together inference of various forms, inductive, deductive, abductive.
But in the end, it is serving not just human purposes, but the need for human players to be able to adapt in a changing world, despite the fact that conflicts will arise, despite the fact that trade-offs are universal, despite the fact that your models will always be partial and incomplete.
John Willis: You know, I'm trying to tie this back to some of the conversations I've had with Mark Burgess. I forget the actual term physicists use for things that are independent of scale, but what you're talking about with the adaptability of these three principles, they're universal, so why don't we just carry them along, you know? Every time there's this gold rush of fluorescence, right? [01:22:00] That's the buffer.
Dr. David Woods: Yeah, this is the foundation behind resilience engineering: trying to understand more through empirical and practical action that stimulates our knowledge, so we can understand the fundamentals.
We're not just being clever. Bottom-up chaos engineering, we then go: oh, that teaches us something. What are those fundamentals? That means we now know how to deploy those fundamentals in different circumstances and different details, but to the same end, right. And that applies regardless of the technological infusion, of this type or that type, that brings new capabilities, stimulates change, stimulates growth, stimulates complexification. When you produce growth and change and complexification, what does it trigger? The need for adaptive capacity. It says brittleness will [01:23:00] rear its head.
John Willis: Yeah, yeah, yeah.
Dr. David Woods: Lag will matter. Saturation will matter. And we need mechanisms to deal with it. The way we dealt with it before needs to be reinvigorated and revitalized. And we can do that better if we understand what the fundamental principles and targets are. The thing we can't quite deliver on yet is the meta-architecture.
But it says: independent of the new technology being invented and its capabilities, independent of the turmoil the world is throwing at us, the level and types of turmoil, here is the architecture that supports being good now, faster, better, cheaper now, and being well prepared to adapt in the future when I recognize things are changing, so I can reframe, [01:24:00] reconceptualize, revise, and synchronize in new ways at new times in order to keep pace with stress. And the biological world gives us examples of success at this.
One of the interesting things is that we have it at different levels: in gene expression, in plasticity, in energy use in the cell, in the healthy cardiovascular system. Some of these have good mathematical models of how it works.
And all of these are cross-scale, highly adaptive systems that are cross-signaling. There is no central authority. There is no central decision maker. There is no command hierarchy, right? These are multi-layer, cross-layer, and, excuse me, not level, layer. It's important to emphasize the word layered, because [01:25:00] the standard notion of levels inevitably takes us into a hierarchy of strictly identifiable levels. And the answer is, it's a tangled set of layers in our networks,
John Willis: right?
Dr. David Woods: They have great modularity. Some parts are more tightly integrated than other parts. And what integrates and synchronizes them is signaling. Rich signaling, right? And different ways to read the signaling. What kind of signaling matters? How I'm reacting to stress.
So you can do your thing and I'll do my thing, but as one part of us approaches saturation, is under stress, the different units adapt to those signals of stress, even if they're not the ones directly stressed, but only indirectly, in order to provide extensibility for the unit being stressed and for the conglomerate that would suffer [01:26:00] if that unit has a brittle collapse.
John Willis: Yeah,
Dr. David Woods: That's all through the biological world. And guess what? We're part of the biological world. And so there are examples in human social systems, in human-technological cognitive systems. We have success stories everywhere. The problem is, in the human-technological-social frame, we don't have architectures that guarantee we will adapt in ways that provide this extensibility.
John Willis: Like when you said, I think you said, all the science is there, and maybe a small group of people like you are putting it all together, and it's so new that we don't really have a clear meta-framework yet.
It's taken me a lot of work just to keep up with you, and I don't want to go there now, but I'm going to go back and listen to some of the discussions I've had with Mark about what you say we've learned from the biological side, because physics has a very similar [01:27:00] notion of scale independence. In other words, you can use the same principles for any size city. It sounds like the physics world has more of a mature meta-framework. But again, that would probably take us another hour to go down that path right now.
Dr. David Woods: I'm afraid we've gone over time, and I don't know how you're going to cut this down.
John Willis: Yeah, no, I'm going to cut this one in half. So I was going to say, this might be the point where we should stop. And just for the listeners, I had a list of eight items; I think we covered two. So I'm hoping you're up for continuing this, because the one I really wanted to get to is to tease out abductive reasoning, because I think there's a lot of fascination there tied to some of the prior conversations we've had, and definitely conversations to have with Jabe and Erik Larson. [01:28:00] So maybe now's a good point, given this will be two 45-minute sessions, and then if you're up for it, we'll just keep scheduling, as often as you're willing.
Dr. David Woods: Well, one of the ones I was suggesting was jumping on the more historical bent on abduction that Bloom did, and taking that forward into: we actually study people really doing this, and we have empirical results, real lessons about how this works, what makes it hard, and what you can do to make it effective.
And it doesn't quite work the way Peirce said it works.
John Willis: Okay.
Dr. David Woods: I mean, it's not that he has stuff that's wrong.
John Willis: Right.
Dr. David Woods: Right. It just says it's incomplete. There's more stuff to it, and that stuff is different. It [01:29:00] comes from a different worldview. And in some ways we'll get back to induction versus deduction versus abduction, because in the end we're going to talk nouns versus verbs.
John Willis: And just for our listeners, that's how I started off my first question. You're like, well, John, it's not that simple.
Dr. David Woods: Well, we try to boil it down so that it's a distillation. But the problem with distillations is that if you haven't already acquired the taste, they can be overpowering and shocking.
John Willis: Sure. Yeah. No, I agree.
Dr. David Woods: Right. Because it's not smooth and well-integrated; it's like, boom, let me hit you with a different idea.
John Willis: Right,
Dr. David Woods: Right. But the power and the potential are there.
Right now the gold rush blinds people to anything but running faster in the direction of gold, in one direction. Everybody's running in the same direction. And when [01:30:00] everybody runs in the same direction, one of the things you should do is diversify your investment portfolio. This is an old rule, and there's some good scientific basis for it, by the way. So if everybody's running in one direction, diversify, because there's already enough resource going that way. Invest in some other directions that have some connection or influence: if this really succeeds, what else needs to be done? If this thing runs into problems, what else would overcome that bottleneck? You need to diversify your investment portfolio. You don't have to put a lot of resource into it, but diversify. Right now, everything is going in one direction for one thing, and that inherently will turn out to be unstable.
John Willis: Right. Exactly. So I think we did a good job. We really laid out the gold rush, and where the adaptive capacity is that gets lost when everybody's running in the same direction. I [01:31:00] think you've laid that out, and I'll try to do it justice in the show notes.
And then I like the idea of maybe, for the next one, fleshing out the new science behind Peirce's abductive reasoning and then circling that back into AI, or not-AI, and we'll get that one on the schedule. And maybe I'll think about whether we should just do one together or maybe invite someone; I'll just kind of think through that one.
Dr. David Woods: I think it would be good to talk with him, but I think we need to record this other one and have him listen.
John Willis: Okay, there you go. That's it. I'm starting to think I'm going to start doing a recording with somebody really smart, send it to somebody else who's really smart, and say: listen to this, now let's talk.
Dr. David Woods: Yeah, I think so.
John Willis: Pretty good idea, right?
Dr. David Woods: That would be fun. This is great. Thanks for inviting me, John.
John Willis: Oh, thank you so much, I mean, honestly.
Dr. David Woods: I'm looking forward to the book.
John Willis: Yeah, yeah.
Dr. David Woods: It's now called Rebels of Reasoning?
John Willis: Yeah, it might be called Fluorescence. But [01:32:00] the idea came from my good friend Damon Edwards, I know you've met him a few times, a big part of the DevOps community. I would tell him these stories, and it's kind of interesting, because one of the things we touch on a little bit is that fluorescence. It's not like everything goes away in the AI winter. It continues, right? And so there was this idea that these people tend to be rebels. You know, the Walter Pittses and the Fei-Fei Lis and the Hintons. We see Hinton now as this god, but he went through a whole period where people were like, what in the heck are you doing? Why are you working on that crazy stuff? It's never going to...
Dr. David Woods: To some degree, it was just the sense that all of us were doing our thing in a noisy, multi-threaded world of science.
John Willis: Yeah.
Dr. David Woods: And why does any one thread stand out more than another? In these periods of fluorescence, you see this concentration and taking off, and all the [01:33:00] attention and resources go there.
And then you go: oh, what fueled this? And you go: oh, look at that work and this work and this work. But at the time, it's always just people doing their work along with a whole bunch of others, much of which turns out not to be important to what happens next, some of which turns out to be crucial, and some of which turns out to be: wait a minute, guys, don't just go down one path. There's more than one path to pursue here.
John Willis: And a lot of them are really interesting. One of the things I did in my Deming book was to try to model a little bit of Michael Lewis-style storytelling. Even though the arcs are the neural networks, the subsymbolic and the symbolic, the history of those, each of the humans has a fascinating story, the whole Norbert Wiener and McCulloch and John McCarthy thing. There's just good human drama in [01:34:00] there as well.
Dr. David Woods: That's one of the things about science, where in part we're supposed to be about reason and cognition. You ever been in academics? You ever been in research? It's not cold. It's conflict. It's personal. It's fights. I mean, sometimes my close professional friends are my biggest headache.
John Willis: Yeah, well, you know what?
Dr. David Woods: How in the world did this happen?
John Willis: Well, what was it, Larson said that Peirce was apparently such a disagreeable human being that Harvard locked up his research for 50 years. Like, how bad a person do you have to be to have your research vaulted? Anyway.
Dr. David Woods: A good reason for us to mispronounce his name.
John Willis: That's it. There you go. Well, good enough. We'll get another one scheduled, probably after the holidays, or maybe one before the holiday, pending your schedule; we'll see. [01:35:00] So anyway, this was great. We'll figure it out schedule-wise, and thank you so much, my friend.
Dr. David Woods: All right, take care.