When we solve interaction design problems, we can get lost in complexity. But the problems aren’t hard. We’re making them hard. In an effort to support loose, experiential behaviors of our users, we’re designing systems that sacrifice elegance in favor of power.
Consider a simple interaction design problem.
Imagine a second-grade teacher. Her students take tests in her class. Test one is about animals. Each student submits their answers: Jim submits his answers. Mary submits hers. Nancy submits hers, and so on.
Test two is about vehicles. Again, each student submits their answers: Jim submits his answers. Mary submits hers. Nancy submits hers, and so on.
We can expect that the teacher would want to view how Jim is doing in the class overall by looking at all of his grades at once. In viewing Jim’s profile, she can see that he failed the test about vehicles, and in the same view, she can also see that he did okay on the test about animals.
We would also expect that the teacher would want to view how the class as a whole did on the test about animals. In viewing a list of the test grades, she can see that most of the students failed that particular test.
These are simple behaviors with simple design solutions.
Now, imagine this scenario.
This morning, the class took a test on sports. Later in the afternoon, the teacher wants to see how they did. She views the scores of all of the students, in a list. One student stands out: Mary failed the test. The teacher can’t remember how Mary is doing in the class overall, so she chooses to view Mary’s entire set of scores. It looks like Mary did poorly on the vehicles test from last month, too.
The teacher wonders if the vehicle test was too hard, so she views everyone’s score on the vehicles test, and sees that Jim failed, too. Is Jim performing badly in the class overall? She chooses to view more detail about Jim. It looks like he’s doing okay.
Having satisfied her curiosity about Mary, Jim, and the vehicles test, the teacher wants to continue looking at the grades from the most recent test on sports – the place where she started.
Ouch – it just got complicated.
The teacher went down a mental rabbit-hole, which is something we’ve all experienced. Our mind travels and our goals are temporarily suspended as we explore. In the teacher’s mind, she can “go into” either the test or the student once, twice, or as many times as she wants, and in any order she wants, and then “come back out” to where she started. She’s jumped from Sports Test Scores to Mary’s Test Scores to Vehicle Test Scores to Jim’s Test Scores. But her starting point – Sports Test Scores – is salient to her because the test occurred just this morning. The rest of her exploration was a side-trip.
The rabbit-hole behavior is exacerbated if the student belongs to multiple classes, because the teacher may then want to jump from Jim in class 1 to Jim in class 2, to exam in class 2, to Mary on exam in class 2, to Mary in class 3, to Jim in class 3, to Jim in class 1. And the problem is complicated even further by group assignments, team teaching, and so on.
Her behavior can be considered phenomenological. This means that it is emergent and unplanned, ill-formed, and negotiable. She may have started with a clear goal, but she ended in a completely different place than expected. Our design problem, which originally appeared to be to support the teacher’s goal-directed behavior, has expanded. It seems that we need to encourage phenomenological rabbit-hole behavior.
One way to support this experiential behavior is to use a modal overlay window. When a teacher is viewing the list of student performance on a single test, she can click on a student. The student’s performance on all tests opens in a modal window. When the teacher is done looking at the information, she can close the window.
The overlay window creates a sense of transience, and helps the teacher remember where she was previously. But this solution has a problem. The teacher can’t pivot to another test; she has to go into the student, remember the test she wants to view next, go out of the student, and into the next test. It’s a stress on working memory, an extra step, and it doesn’t support the real things a teacher wants to do.
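The modal’s limitation is easy to see if we model its navigation as a stack. This is a toy sketch, and the view names are hypothetical:

```python
# Toy model of modal navigation as a stack (view names are hypothetical).
# A modal can only return the teacher to the view beneath it; there is no
# sideways move from a student's modal to a different test.
nav = ["sports test scores"]        # the list view where she started
nav.append("mary's profile")        # open Mary's scores in a modal overlay

# To look at the vehicles test next, the only legal moves are out and back in:
nav.pop()                           # close Mary's modal...
nav.append("vehicles test scores")  # ...and re-enter at the list level
```

The pop-then-push is exactly the extra step and working-memory burden: the teacher must hold “vehicles test” in her head while she backs out of Mary’s profile.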
One “solution” to this shortcoming is to use a modal-on-a-modal, which most interaction designers will reject out of hand (and for good reason).
This is a style of interaction design problem that designers run into over and over: bringing a many-to-many database relationship to life on a screen in a simple, usable, and powerful way. There are many other solutions, and while all of them solve the problem, none of them do it in a particularly elegant manner. None of the solutions are satisfying. They try to support the rabbit-hole exploration, but all fall short and place some sort of usability or cognitive burden back on the teacher.
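Underneath, the structure is just a join between two entities. A minimal sketch (names and grades are hypothetical) shows why both of the teacher’s views are equally natural queries over the same rows, and why neither deserves to be privileged on screen:

```python
# A minimal sketch of the underlying many-to-many data (hypothetical rows).
# Each score joins one student to one test, so the same rows answer both
# "how is Jim doing across tests?" and "how did everyone do on this test?"
scores = [
    {"student": "Jim",  "test": "animals",  "grade": 78},
    {"student": "Mary", "test": "animals",  "grade": 74},
    {"student": "Jim",  "test": "vehicles", "grade": 41},
    {"student": "Mary", "test": "vehicles", "grade": 44},
]

def by_student(name):
    """All of one student's grades -- the 'student profile' view."""
    return [s for s in scores if s["student"] == name]

def by_test(title):
    """All grades on one test -- the 'test results' view."""
    return [s for s in scores if s["test"] == title]
```

The data is symmetric; it is only the screen, which must pick one pivot at a time, that forces an asymmetry.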
Impossible problems have unsatisfying solutions
The problem presented above is an impossible problem. It is impossible not because it can’t be solved, but because it can’t be solved in a way that is clean. The solution will have recognized shortcomings, and addressing one shortcoming forces another to come into focus. There’s no way to whack all of the moles, and that lack of finality or refinement drives designers crazy.
Some impossible problems follow a set of established patterns. One is the grey checkbox problem, where a list can be batch-selected to be true, but then one item can be individually set to false (resulting in a list that is “mostly” true.) Another is faceted browse, where facets within groups broaden search results (selecting “green” and “red” means “show me things that are either green or red”) but facets across groups narrow search results (selecting “green” and “small” means “show me only items that are both green and small.”) Another is product configuration, where making a choice (“I would like the sunroof”) means force-deselecting an incompatible choice that isn’t on the screen at the same time (“If you have the sunroof, you cannot have the alloy wheels, and if you have the alloy wheels, you can’t have the vehicle in blue, and if you have the vehicle in blue, you can’t have the sunroof, and…”)
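The faceted-browse rule, at least, is simple to state in code: OR within a facet group, AND across groups. A minimal sketch, with hypothetical item fields and facet values:

```python
# Faceted browse: OR within a facet group, AND across groups.
# Item fields and facet values here are hypothetical.
items = [
    {"color": "green", "size": "small"},
    {"color": "red",   "size": "large"},
    {"color": "blue",  "size": "small"},
]

def facet_filter(items, selections):
    """selections maps a facet group to its chosen values,
    e.g. {"color": {"green", "red"}, "size": {"small"}}."""
    return [
        item for item in items
        # AND across groups: every group with a selection must match.
        # OR within a group: matching any one chosen value is enough.
        if all(item[group] in chosen for group, chosen in selections.items())
    ]
```

The logic is trivial; the impossible part is making a screen that conveys, at a glance, that two checkboxes in the same column mean “or” while two checkboxes in different columns mean “and.”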
These problems – obnoxious on a computer – are generally trivial to address in a non-digital environment.
If you had twenty items in front of you at a store and put all of them in your basket except one, it would be very clear which were selected and which were not. The cart, shelves, and items don’t need to do anything special to support this. This is the grey checkbox problem.
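On screen, the “mostly selected” list is why the batch checkbox needs a third, indeterminate state. A minimal sketch of deriving that state (the function name is mine):

```python
def header_checkbox(selected_flags):
    """Derive the batch checkbox from the individual items:
    True (all selected), False (none selected), or None, the
    indeterminate 'grey' state for a mixed list."""
    if not selected_flags or not any(selected_flags):
        return False          # empty or nothing selected
    if all(selected_flags):
        return True           # everything selected
    return None               # the grey, "mostly" state
```

Deriving the state is easy; deciding what clicking a grey checkbox should *do* (select all? clear all?) is where the arguments start.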
If you asked a car salesman to get you a blue car with a sunroof and alloy wheels, and he said “We only have these three models in stock, and they are right in front of you,” your next steps would be very obvious. The cars and dealer don’t need to do anything special to show you what’s going on, and it’s pretty clear that no matter how much you continue to say “sunroof and alloy wheels,” it isn’t going to happen.
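The circular constraint chain can be sketched as pairwise incompatibilities, where choosing one option force-deselects its conflicts. The option names and rules here are hypothetical:

```python
# Hypothetical pairwise incompatibilities forming a circular chain.
pairs = [
    ("sunroof", "alloy wheels"),
    ("alloy wheels", "blue paint"),
    ("blue paint", "sunroof"),
]

def conflicts_of(option):
    """Everything incompatible with an option, in either direction."""
    return ({b for a, b in pairs if a == option}
            | {a for a, b in pairs if b == option})

def choose(selection, option):
    """Add an option, silently force-deselecting anything it conflicts
    with. The dropped options may not be on screen when it happens --
    which is precisely the usability problem."""
    return (selection - conflicts_of(option)) | {option}
```

The car lot resolves these constraints for free, by only containing buildable cars; the configurator must instead explain, mid-interaction, why a choice the user made three screens ago just disappeared.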
“Solving” these problems in the natural world is fairly trivial. It’s when these problems show up in a digital context that they become impossible, because we start to design for hypothetical emergent behavior. We designers, who so frequently push back on edge cases, find ourselves optimizing for them. What if the user clicks on the “select all” checkbox, unselects something, moves to the next page, and selects another thing? What should happen? What if the user clicks the sunroof, learns about how the alloy wheels aren’t compatible, and then selects them anyways? What should happen?
Common to these solutions is the unsatisfying feeling of having made something that will simply have to suffice.
A dance of local/system moves
Impossible problems are solved through a constant back-and-forth of small focused design moves and larger systemic design moves.
A “local” interaction design treats a user’s goal in idealized isolation, as if it is a thing that happens from start to finish without interruption. This way of thinking assumes that when I start to deposit a check, I won’t think about any other part of my finances until I’m done; or when I start to buy a pair of shoes, I’m focused on that transaction until it’s complete; or when I buy a car, I select features in a step-by-step, methodical manner. When a designer takes local moves to build a design solution, they are taking a positivist approach to the problem instead of a phenomenological approach. They are assuming that once a user has identified their goal, their approach to solving it is then reasoned and practical. In a situation like this, a designer expects to enumerate all of the constraints on the goal prior to tackling the problem. It seems that if we conduct enough research with users and critically analyze what we learn, we can list all of the things they will need our system to do; then, we can design a solution to help them do those things.
For example, Rebecca – a junior designer – starts taking on the many-to-many impossible grading problem by listing the things a teacher will need to do:
- View the performance of a single student across all of the tests they have taken
- View the performance of all students on a single test
Rebecca draws a main menu, with options to View Students or to View Tests, mirroring the goals the teacher has.
She sketches the View Students flow. To view a single student’s performance on a single test, the teacher will first need to identify the student, then view that student’s test scores, and then finally select the specific test. Rebecca draws a screen with all of the students, a screen for one selected student profile, and a screen with one specific test for one specific student.
Rebecca has made a series of design moves in support of a local and isolated understanding of the problem, which rest on a series of assumptions about teaching behavior (that it is also local and conducted in a task-by-task fashion.) Next, she’ll repeat this design process again, taking moves towards another local solution: making it possible for a teacher to start with a test and view student performance.
This is a simple and effective solution, if we think about the problem only through a lens of individual, logical use cases. First the user does this, then they do that, and finally, they do that.
But as we saw above, that’s not how teachers think about their students and their assessments. In a classroom, teachers look for anomalous behavior and for high and low performers. Their attention drifts based on what they see – one student reminds them of another similar, or opposite, student. They mentally group students in bundles with soft, organic, and always changing boundaries. They are frequently interrupted, and they become distracted.
Should our system make a small claim of supporting two or three tasks, and solve for those tasks? Or should it make a large claim of supporting the teacher in whatever way she chooses to teach, and solve for that behavior?
If we do the latter, it starts to seem like a solution needs to treat the problem of student performance as a system to be explored, not a set of jobs to be done. A “systemic” interaction design treats a user’s goal as phenomenological – something that is emergent based on the situation and reality of their experience. Goals, in this way of thinking, are not things that can be enumerated ahead of time, at least not in an exhaustive manner. They are things that come and go, where the specifics are fundamental to the experience.
Constraints emerge differently in local and system design solutioning
Designers establish constraints that shape the things they make. These constraints often develop so quickly that the designer doesn’t even know they are creating them. The style of constraint is informed by the way they approach local and systemic design problems, and how they think about the goals that users have in those local or systemic problems.
In a local and positivist view of a design problem, a designer can articulate a crisp view of the problem they are solving, as they are solving it: “I need to make it possible for a teacher to view the performance of all students on a single test.” This is the guiding frame of the problem. As they draw, respond, change, and re-draw, their inner judgement will be based on how well they are, or aren’t, helping the teacher to do that thing.
Through a systems and phenomenological lens, a designer will usually start with only a fuzzy view of the problem they are solving, based on a few perspectives on how their users think, behave, and feel: “Teachers want to help their students improve. Test scores are one way that the teacher can see who needs help the most. Teachers don’t have a lot of time.” This lack of crispness in problem definition is on purpose, because it gives the problem frame freedom to move. As they draw, respond, change, and re-draw, their judgement of “good” and “bad” is based on how flexible the solution is to the needs of the teachers rather than how rigid it is to solving one particular use case.
Rebecca’s solution was local and positivist. Imagine now that the shortcomings of this solution become apparent. Through sketching, she starts to build on top of the foundation she’s made. Instead of just pivoting back and forth between list/profile view, what if the teacher could quickly glance at each student from the test list view to see if their grade is consistent with their past performance?
And, what if the teacher could just as easily glance at each test grade from the student profile, to see if the students in the class are all performing equally?
Adding a little peek in both contexts seems like an elegant solution, because it’s a single interaction style that can be easily discovered and then understood as it’s reused. The peek is a local solution, but it acts as a system move.
But with a little more reflection, or with a little more interaction with customers, it becomes clear to Rebecca that when a teacher starts looking at one student and sees that he performed poorly on the test, and notices that another performed poorly on the test too, the teacher wants to go learn about both students at once to see what happened.
Rebecca goes back to local solutioning. The peek alone isn’t a sufficient screen for such in-depth interaction. What about an extended peek with an overlay?
It works! But… it only works once. What happens when a Profile has a Test overlay, and the teacher then wants to compare it to a different Profile/Test on screen at once?
It’s back to system solutioning.
To support the rabbit-holes, this dance of big and small, system and local solutioning continues, typically until the designer gets frustrated or runs out of time. Sometimes the dance takes place in seconds, on a piece of paper or in a design program like Sketch. Sometimes it takes months, playing out through feedback and launch cycles. In all cases, the dance is about pushing a design move through a set of constraints to see where it breaks, and then changing either the design move, the constraints, or both at once.
The relationship between impossible problems, unsatisfying solutions, and a theory of problem solving
The late Herb Simon was a researcher at Carnegie Mellon University. He studied how people solve problems, and his research laid the groundwork for modern-day artificial intelligence exploration. If we can understand how people solve complex and poorly structured problems, perhaps it will be easier to design software that can make that form of problem solving easier.
One of the fundamental theories to come out of his research is the idea of bounded rationality. This theory explains that people act rationally to solve problems, but they do it within the boundaries they have at the given time. Since we always have some form of boundary – be it our own abilities, or the information we have at our disposal – our solutioning is never going to be optimal. It will always have some sort of deficiency. From a point of complete omniscience, we could judge a solution in comparison to all the others, but as mere mortals with only a view of our solution, our best try will have to be enough. This notion of a solution to a problem being satisfactory and sufficient, but not necessarily good, results in what Simon calls “satisficing.”
When Rebecca created a local solution to the problem, she outlined some user goals, and then designed to help someone achieve those goals. In a sense of bounded rationality, by approaching the problem locally, she constructed very broad boundaries. Put another way, she had few cognitive or knowledge limits on how to solve the problem. In the context of a local value proposition – “we promise you can achieve these goals” – her answer is probably much more pleasing, both for her to design and for the teacher to experience. Her solution was more than just satisfactory and sufficient – it was also good.
A designer who thinks of systems will come up with a solution that does more things, and supports a user as they fulfill their experiential needs, drifting in and out of goal-directedness. It will promise more. But in its promise will come complexity, and to manage that complexity, the designer will begin to let the problem grow. The bounds of their rationality are small and tight – they have large cognitive and knowledge gaps on how to solve the problem because they want to give the teacher room to do things that aren’t easily describable or knowable.
By supporting phenomenological behavior, they’ve transformed the problem into an impossible problem, and so their solution will probably be unsatisfying. It will have loose ends, edge cases that suddenly seem less edge-like and more plausible. It will provide lots of power and functionality, but at the expense of coherence. A systems solution promises more, and so the best-case solution is inevitably worse than the best-case local solution.
Simplicity is really about rejecting problems that require systems answers.
We’re tempted by systems approaches and complex problems. They feel good to work on. They feel important. They seem more connected to the way people actually behave and want to work, as if a solution to a systems problem will offer more intimate value to a person. All interaction design problems – not just the really gnarly ones – can be approached with a philosophy that is more positivist or more phenomenological, more local and fixed or more emergent and flexible.
All problems are systemic if we allow them to be. But not all need to be. And if we allow them all to be systemic, our work changes. To acknowledge emergent, phenomenological behavior, the way we frame problems needs to be looser, and while goals, scenarios, and use cases are helpful, they don’t lead us towards an interaction framework – only towards small tributaries off a larger pulsing and ever-changing body of human behavior. In this way, a naïve, junior designer has a leg up on years of experience, because they haven’t yet learned to think systemically. Their answers will always be local, and when they learn the craft of making elegant, crisp, clean, and functionally simple answers to what they construe as local problems, they will create satisfying answers to possible problems.
A local solution will probably promise only a small amount of value. But that solution will feel elegant in its simplicity, and will likely deliver on the value proposition. When we find ourselves confronted with an impossible problem, and we begin to draw a solution that feels only sufficient, we should stop and reflect. It’s likely that we’re the ones who made the problem impossible in the first place.