Artificial Intelligence (AI) has long been the stuff of science fiction — machines doing what humans do — in movies from The Terminator to Blade Runner to Her. When we step back and look at machine learning as a real, current tool (rather than a means for robot domination or interspecies love stories), the underlying theme is that by understanding how people solve problems, computers can learn to do the same. Incorporating new forms of technology and algorithms takes patience — and a willingness on designers’ part to experiment and to get it wrong. What is a human’s place in this world of bots, voice and predictive interfaces, computers, and VR/AR? Does it make more sense to just let the computers take over and design based on the data we humans generate?
Human solution-finding includes well-defined problems and activities, like a game of chess, that have specific, finite rulesets which can be learned over time. Then there are ill-defined problems and activities, like software development. Some researchers hold that we can only claim “true AI” when computers learn to solve those ill-defined problems for us:
“At the heart of the discipline of artificial intelligence is the idea that one day we’ll be able to build a machine that’s as smart as a human. Such a system is often referred to as an artificial general intelligence, or AGI, which is a name that distinguishes the concept from the broader field of study. It also makes it clear that true AI possesses intelligence that is both broad and adaptable. To date, we’ve built countless systems that are superhuman at specific tasks, but none that can match a rat when it comes to general brain power.”
What Are the Limits of Machine Learning?
Machine learning is a way for computers to evolve their abilities: to play enough games of chess that they learn to beat the grandmaster. Get an email about Viagra and flag it as spam — the computer has a data point. Fifty people flag it as spam, and it knows that specific email is spam. Hundreds of people flag a thousand different Viagra emails as spam, and now it knows the characteristics of a Viagra email well enough to predict how to handle a new one. At a Google-like scale, it can get pretty good: in 2015, Google said it was seeing a 0.05% false-positive rate on spam, and it’s only gotten better since.
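The flag-to-prediction loop described above can be sketched as a toy Naive Bayes classifier — a common technique for spam filtering, though the class, training examples, and word-level model here are illustrative assumptions, not Google’s actual system:

```python
from collections import Counter
import math

class NaiveBayesSpamFilter:
    """Toy Naive Bayes filter: each user flag is one labeled training example."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.message_counts = {"spam": 0, "ham": 0}

    def train(self, message, label):
        # Every flagged message adds data points: its words under its label.
        self.message_counts[label] += 1
        for word in message.lower().split():
            self.word_counts[label][word] += 1

    def predict(self, message):
        # Score a new message against what the model learned from past flags.
        total = sum(self.message_counts.values())
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            # Log prior: how common this label is overall.
            score = math.log(self.message_counts[label] / total)
            n_words = sum(self.word_counts[label].values())
            for word in message.lower().split():
                # Laplace smoothing so unseen words don't zero out the score.
                count = self.word_counts[label][word] + 1
                score += math.log(count / (n_words + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

spam_filter = NaiveBayesSpamFilter()
spam_filter.train("cheap viagra buy now", "spam")
spam_filter.train("viagra discount limited offer", "spam")
spam_filter.train("lunch meeting moved to noon", "ham")
spam_filter.train("quarterly report attached", "ham")
print(spam_filter.predict("viagra now cheap"))  # → spam
```

With hundreds of flags instead of four, the word statistics become the “characteristics of a Viagra email” the text describes: the model never stores any single message, only the patterns the flags reveal.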
More advanced machine learning is an integral part of self-driving cars, smart home automation, mapping, and targeted advertising. So it makes sense that Google is now extending its massive data set to include data about applied creativity like design.
Is it inevitable that design will become a victim of the constant march of technology, like assembly-line production or in-person dating?
If we think about “design” as a noun, then probably. Machines may soon be able to conceive of chairs, toasters, websites, and buildings that will sell really, really well because computers have been trained to give consumers what they want — though the resulting designs may not win awards for creativity.
Computers are getting better at solving ill-defined problems, but they are still a long way from solving wicked problems: complex systems such as poverty, access to education, climate change, nutrition, and behavior. Design-as-a-verb in these contexts means something different: applying a way of thinking to problems, often within the context of government.
Experimental and observational research data can explain many of the reasons people turn to drugs, or what happens when we cut funding for public schools, or why people eat food that’s bad for them. But when it comes to data about how to help society, companies and governments often act against what that data suggests, especially if a potential solution runs counter to traditional ideologies or large lobbying groups. And that’s going to really confuse the machines.
Design has evolved from traditional realms like visuals and textiles to new paths such as interactive design and design strategy. Growing societal uncertainty opens up new avenues of “making,” often in a context that doesn’t benefit from machine learning. Designers who work with community technology practices and interaction design are incorporating new ways of approaching problem solving that combine ethnographic research with aesthetics and service design, rather than executing a tangible object like a poster or a chair.
Making a service or product that positively impacts political engagement — like Pioneer Works did with the kiosk that automatically sends a recorded message to the user’s state representative — inherently utilizes originality and an emotional understanding of people and culture.
This type of design needs to bring together a wide range of disparate goals, objectives, and vocabularies. It needs to navigate politics. It needs to be convincing across diverse regions and groups of people. It needs to compromise.
“The designer does not begin with some preconceived idea. Rather, the idea is the result of careful study and observation, and the design a product of that idea.” — Paul Rand
The people relying on complex solutions to wicked problems and a rapidly evolving world need humans to do what we do best — use design as a tool for communication and create from a place of informed empathy. Maybe the machines will get there eventually, but it’s going to take a very long time.