For the next 20 minutes, I want to introduce you to a little bit of theory, and it might come across as somewhat abstract, maybe extremely abstract, but I have found some of these concepts extremely useful in understanding why the old paradigms and our default problem solving and sense-making methods often don’t work. I’ve never seen anybody try and teach introduction to Design using the Cynefin framework, and there’s probably a good reason for that, but what I’m about to describe can be useful well beyond the realm of problem solving and design.
There’s this story that the head of Enterprise Design at IBM tells in a video for Think JSOU. So Enterprise Design is IBM’s version of Design Thinking that they developed in conjunction with Stanford’s D.School and the design firm IDEO – these are some of the leaders of the design thinking movement over the last several decades…
He tells this story about a project that IBM was contacted about. There was this mobile kiosk for airline employees that rolled around to different parts of the terminal and was expected to be used for checking customers in at the gate.
And the employees weren't using this mobile kiosk effectively, which was problematic for the airline. They presumed there must be something wrong with the software or the design of the kiosk and that it needed to be updated.
IBM is in the business of computer kiosks like this, but before they began work on creating a better product, their designers started by spending several weeks just observing employees…
And through observation, they discovered fairly quickly that what was going on wasn’t primarily about the software… or the hardware…
What they observed were primarily female airline employees, who had just gone through a corporate rebrand that put them in form-fitting pencil skirts… and while the mobile kiosks were capable of being plugged in, the gate attendants weren’t plugging them in, because their uniforms made it difficult and uncomfortable to bend down and plug them into the outlets that were so close to the ground.
They discovered that there were these other factors at play, completely outside of the software or hardware that were preventing the desired behavior around the mobile kiosks.
Ultimately they ended up designing something that kept battery life, speed, and accuracy in mind, and leveraged IBM's partnership with Apple to create an iPhone-based solution which didn't require being rolled around or plugged into the wall.
So there are a few things about this story that I think are powerful and important.
One is the fact that a software company acted like a bunch of anthropologists for several weeks in order to identify the problem here. They sat back and observed. They took notes and did ethnographic research to glean insights about the behavior of airline employees, not only behavior around this kiosk, but their whole workday.
The second is the fact that the problem existed well outside the domain of software but was still something IBM had to be concerned about.
This story tells us a lot about good design for humans. Good design incorporates facts about culture, emotions, the environment, background, how we feel and how that impacts our behavior. Design is intended to elicit a behavior. In this case, the behavior was “use the kiosk”, and the elements that affected that had absolutely nothing to do with the software and only partially to do with the hardware.
So how do we know what elements are most important? The short answer is "we don't". We can't actually know that in advance when we're designing things for human beings, because human beings, and the factors that motivate them to take action, are complex.
I’d like to introduce you to a sense-making framework called Cynefin.
It’s spelled funny because the word is Welsh, and in Welsh, Cynefin means “the place of our multiple belongings” or “habitat” maybe…
I’d like to describe design failure through the lens of Cynefin and that first requires me to describe how it works.
Cynefin depicts three types of systems: ordered, complex, and chaotic.
Ordered systems sit on the right side of the framework, divided into the clear and complicated domains.
Ordered systems tend to be human-made, closed systems. We can account for all relevant factors. What happens within the system follows a consistent and repeatable path. They are tightly contained and controlled, which allows us to know cause and effect in advance. They are also only the sum of their parts. If we take them apart, move them to another location, and put them back together, they will be the same exact system again.
For example, a calculator is an ordered system in the clear domain. I know exactly what I will get if I type 2+2. If I know inputs, I can determine outputs in advance.
The steps for making decisions in the clear domain are sense, categorize, respond. There is a category, addition, that dictates the best practice for using the calculator. I don't have to wonder whether this time it would be best to push the minus button, or whether I might have to push the buttons in a different order. You just identify the category and take the necessary steps.
This is the domain of best practice. There is one best practice for each category.
A complicated system, while still ordered, requires some degree of expertise to navigate. So the steps there are slightly different – sense, analyze, respond.
An example of a system in the complicated domain would be an aircraft. It’s a linear system. It is the sum of its parts. If there’s something wrong with the system, I should probably call an aircraft mechanic, and they will be able to use expert analysis to diagnose what’s wrong.
If we took an aircraft completely apart, moved it, and put it back together, it would still fly. It is entirely the sum of its parts.
Much of medical practice actually fits within the complicated domain. Doctors go to school in order to be capable of sensing, analyzing, and responding based on a diagnosis.
Solving problems in the complicated domain can employ methods like systems thinking or systems engineering. You can map out the whole system and use methods like the theory of constraints to increase efficiency. Taylorist management, the efficiency-seeking factory management technique, works in this domain. Checklists can also serve an important purpose here, as Atul Gawande describes in his book The Checklist Manifesto: they help humans perform complicated tasks that stretch their cognitive capacity, reducing common errors by reducing cognitive load.
The complex domain, however, is unordered. We cannot determine cause and effect in advance. The system is usually open, and there are too many factors, constraints, and agents in interaction to keep track of. These factors might also change in their level of importance within the system.
An example might be history: unexpected events can drive historic developments. In the book Team of Teams, General Stanley McChrystal describes the complexity of the modern era with the story of a fruit cart vendor in Tunisia who set himself on fire in December of 2010, an act which sparked the Tunisian Revolution and the wider Arab Spring. This could not have been anticipated in advance, even though cause and effect can be determined in retrospect. We can develop theories about the future in complexity, but forecasts are unreliable.
Complexity is about dispositional states: the state of the system right now and its adjacent possible states, along with the attractors and modulators that might nudge us in one direction or another.
For example, if IBM had improved the software of the kiosk without consideration for the rest of the system that it occupied – a system that included culture, branding, feelings, and perceptions – they likely wouldn’t have solved the problem, because the system had other attractors within that prevented the desired behavior.
In complexity, the steps for making decisions are probe, sense, respond. You have to interact with the system, to probe it, to test its current state, and based on the resulting understanding of the disposition of the system, you respond quickly, because that particular systemic state, that configuration, might not last long.
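For those who think in code, the three decision models can be laid out side by side. This is purely my own illustration, not anything from the Cynefin literature; the names and structure here are invented for clarity.

```python
# A toy illustration (my own, not from the Cynefin literature):
# each domain pairs with a different decision sequence.
DECISION_STEPS = {
    "clear":       ["sense", "categorize", "respond"],  # apply the best practice
    "complicated": ["sense", "analyze", "respond"],     # bring in expertise
    "complex":     ["probe", "sense", "respond"],       # experiment first, then act
}

def steps_for(domain: str) -> list[str]:
    """Return the decision sequence for a given Cynefin domain."""
    return DECISION_STEPS[domain]

# In the complex domain, action (the probe) comes before understanding:
print(" -> ".join(steps_for("complex")))  # probe -> sense -> respond
```

The point of writing it this way is that the difference between the domains is visible at a glance: only in the complex domain does acting on the system come before making sense of it.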
Another example of complexity would be a team of seven people trying to accomplish a task or develop something… perhaps a piece of software…
In the past, software development was done with a method called waterfall. Projects began with the collection of a massive list of requirements. It would often take years before a piece of software was completed and delivered, and by the time it arrived, millions of dollars had been spent, and frequently, the software didn’t work as intended.
The book Scrum begins with a story about an FBI software project that cost millions, took years, and failed miserably. They didn't end up using it at all.
Scrum is a method of product development that seeks to deliver value as quickly as possible based on current user insights. As the basis for the Agile movement, Scrum takes into account the complexity of developing products, including the internal complexity of managing a team trying to tackle the complexity of designing and developing a product, by breaking the work down into sprints and using new social technologies like daily standups and retrospectives… which, interestingly, are ways of probing the state of the complex system of the team, to sense when the system has shifted or new information needs to be responded to.
Human Centered Design emerged from the fray of new social technologies and methods, I think in parallel with the Agile movement: a movement of adopting better social technology to deal with the actual complexity both of the problems we face and of teams of human beings working together. Effective problem solving isn't just about doing the right steps. It's also about employing them effectively, in a way that works for the people involved. That is complex. It's not an ordered system you can simply adopt. The framework helps, but it requires the right mindset, openness, transparency, a certain organizational culture, psychological safety, and good facilitation. It can be both uncomfortable and rewarding…
So the reason I talked about these different domains of the Cynefin framework is because I think there’s something powerful about understanding what types of solutions work in what domain, and what happens when you treat a complex problem like it’s merely complicated…
Well, in Team of Teams, General McChrystal describes how the best practices and expert approaches developed over centuries of conventional warfare failed in the early years of the Iraq War. You can't treat a complex environment like it's merely complicated. The solution he describes is how the Joint Special Operations Task Force became more complex itself, became capable of adapting to the complexity it faced against an insurgent force. In complex problems, the solution is often about becoming something as much as it is about creating something. It's not enough to teach people a five-step process for solving complex problems. What we need most, as Karen Petty Hold and Jeanne Liedtka describe in their upcoming book The Innovator's Journey, is the opportunity to grow in ways that make us capable of navigating that process.
I had a daughter who was highly disabled and on hospice. She was dying. And I received an assignment to Hawaii, in 2013 I think… and the program responsible for ensuring special family members get adequate care, the Exceptional Family Member Program, denied my daughter travel. I was told that I was going to have to live apart from my family for 3 years, at a time when we were certain she was less than a year from death. My experience with that program, and with solving that problem during that period of time, was very much one of interacting with an ordered system: people simply processing requests like factory workers on an assembly line, in the way that policy rigidly told them to. That system failed to account for the degree of complexity it was facing, and it caused us a great deal of pain. It also resulted in me thinking differently about the way we need to manage ordered systems that might end up violating our values and hurting people.
A lot of government and corporate programs are like this: they set up ordered systems that require heroes to swoop in and rescue people who are being eaten up by the machine that was designed to help them, because those people just don't fit the system, because things weren't as predictable as we thought they would be… I have talked to countless people about the Exceptional Family Member Program over the years, and it's a great example of a system intended to help people whose impacts have historically, and often inadvertently, been harmful. I should add here that in recent years, I've spoken to a number of people who are engaged in trying to make that system function better.
Another example I like to use is the Air Force's response to issues of resiliency, mental health, and suicide. Most of the visible responses we see at a programmatic level seem to assume the system is in the clear domain. They're best practices: do yoga, have good sleep hygiene, exercise regularly, etc… But best practices don't actually work well when you're dealing with real mental health crises that require intervention, or with catastrophic life circumstances. If you've ever dealt with trauma and then had to sit through a briefing about sleep hygiene, you might understand what I mean. The people who die by suicide are likely not doing so as a result of poor sleep habits, but as the result of a complex or chaotic combination of factors that most leaders aren't currently equipped to address head-on. The issue is highly complex, and in complexity, we probe through narrative gathering, through conversation, the way talk therapists do… Sometimes the best intervention leaders have, because of the rigid constraints of our policies about seeing mental health professionals, is to refer people to the chaplain, who has perhaps the least rigid constraints on what they can do with and for people in crisis. But it doesn't take much to imagine why some might hesitate to use a resource who wears a Christian cross or any other specific religious symbol. I don't say this to denigrate religion, but it adds a thick layer of constraints, which often prevent the desired behavior.
I have a close friend in the Air Force who is gay and struggled with her Christian faith in the days before Don't Ask, Don't Tell was repealed. To this day, I think a lot about the experience that she had, and how it reflects on our still-existing configuration of resources.
It all comes down to the level of constraint that’s appropriate for the domain you occupy. You can’t run a war like it was a factory. You can’t address mental health issues like they were medical issues. Depression isn’t like an amputation. You can’t run a factory like it’s a garden. You can’t run a software development team like it’s a factory.
This is why, in the complex domain, I recommend opting for metaphors that are organic in nature. Think ecosystem rather than machine, and if you have experience with gardening, you already know better than to assume things are more predictable than they actually are.
Sometimes, what we’re designing is a calculator. We’ve determined the context we want people to use our product in, the types of tasks we want them to be able to complete, and we can focus in on the experience they need to have when using this closed system in a controlled environment that will always work the same way every time. We can focus entirely on the User Interface problems of usability and aesthetics.
But unfortunately, when designing anything for humans, you can't actually know in advance which domain your product will end up in, because if the context is complex, you can't tell beforehand whether anyone will want or use your solution.
As illustrated by the IBM story, what we mostly have at the outset are unknown unknowns. We might have theories about what the problem is, but we have to set those aside, set aside all our premature solutions, and step into the complex environment that the problem actually exists in.
In the complexity of the design process, we begin with probes: conversation, narrative gathering, ethnographic research. We sense the current state of the system, and from that we develop hypotheses about what will create the intended value or provoke the desired behavior. Then we test every assumption along the way for validity: more ways of probing to sense what comes next.
Human Centered Design, Design Thinking, Agile Development, and Lean Startup are well-formulated processes for identifying key insights about the people we're designing for and their problem environment. They start by assuming there are a number of unknown unknowns that require exploration and divergence to get a grasp on, and then they iterate forward in learning loops of build, measure, learn to create and contain a stable system of value creation.
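To make the build-measure-learn loop concrete, here's a minimal numeric sketch. Everything in it is a hypothetical toy of my own (the function names and the 0.5 learning rate are invented, not from any of these methods): a "belief" is nudged toward an observed "reality" a little each iteration, the way each learning loop narrows the gap between our assumptions and what people actually do.

```python
# Toy build-measure-learn loop (entirely my own illustration; the names
# and the 0.5 adjustment factor are arbitrary, not from any design method).
def run_learning_loop(belief: float, reality: float, rounds: int = 5) -> float:
    """Nudge a numeric 'belief' toward observed 'reality', one loop at a time."""
    for _ in range(rounds):
        prototype = belief            # build: embody the current belief
        gap = reality - prototype     # measure: observe how reality differs
        belief += 0.5 * gap           # learn: revise the belief partway
    return belief

# Five loops close most of the gap between assumption and observation:
print(run_learning_loop(0.0, 1.0))  # 0.96875
```

The design choice the toy captures is that no single loop gets you all the way there; each iteration only partially corrects your assumptions, which is exactly why these methods insist on repeated, cheap cycles rather than one big up-front plan.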
I hope this has been informative for you. The key takeaway here is that our default paradigms for problem-solving often fail to take into account the necessity of observation and interview, and the resulting products and systems we build often assume they occupy one of the ordered domains. The complexity they actually face results in design failure.
Like I said, the Cynefin framework is useful for applications far beyond the realm of design, but I find it useful for grounding us in the mindset that context is key. Design always begins as complex, so simple, ordered approaches will often fail, and the solutions we build might also need to be complex themselves: capable of probing, sensing, and responding somehow… capable of adapting as their context inevitably changes…