Inside Goal-Directed Design: A Conversation With Alan Cooper (Part 2)

We continue our conversation with Alan Cooper at Sue and Alan’s warm and welcoming ranch in Petaluma, CA, which, in addition to themselves, is home to sheep and chickens, a cat named Monkey, and a farmer who works the land.

Part 2 brings us up to the present day, with a discussion of the applications and fundamentals of Goal-Directed Design that support its success at Cooper and beyond.

From Theory to Practice

CK: Okay, so having established the foundation of Goal-Directed Design in Part 1 of our conversation, let’s fast forward now to after you started your company, Cooper Software. How has GDD figured in?

AC: Basically you find one person, understand their vision and their final desired end state, and then make them ecstatically happy about reaching that end state. That is the essence of Goal-Directed Design. And what you need are two things: 1) Find (or synthesize) the right person and 2) Design for that person. At a place like Apple, Steve Jobs was already that right person, and they needed to look no further. For us at Cooper, a team of trained designers needs to synthesize the representative user, called a persona.

CK: Can you say a little more about Personas and their context in the process?

AC: Personas are the end result of going out into the field and researching users and the patterns that indicate what their desired end state is. Then we create the archetypal persona and walk that archetype through a scenario with a proposed solution, like a test flight in a simulator. And when your persona’s needs are satisfied in multiple scenarios, you know you are on the right track.

Designers at Cooper can go into healthcare, tech, or jet engine design, wrap their heads around it, articulate the representative user’s desired end state, and from there identify the right problem to solve. At that point, synthesizing form is just the work, not magic.

CK: I’ve heard you talk about pair design being part of the success of this goal-directed process. Can you touch on that a bit?

AC: Yes, at Cooper Goal-Directed Design is enhanced by our practice of pair design. Rather than wrestling with a problem alone, externalizing the problem with a partner usually yields the most success. And building on that, it turns out pairs work most effectively in particular combinations of skills. We found that designers tend to naturally fall into two camps, and we ended up calling these designers Generators and Synthesizers. You could think of the Generator as the driver and the Synthesizer as the navigator. You need both of them to get where you’re going, and it’s not that the Generator can’t navigate or the Synthesizer can’t drive, it’s just that if you try to navigate while you’re driving you might crash into something, and you’ll go slower, and you might miss turns. And if you try to drive while you’re navigating, you’re going to end up not taking the most optimal route, and you might forget to stop for gas when you should, and you might end up backtracking.

In the early days when we were inventing these roles, we were a little more prescriptive about them, with the Generators always at the whiteboard and the Synthesizers taking notes. But as this became successful we realized we didn’t have to be so doctrinaire about it; the pair could switch roles, become much more fluid, and be even more effective. In principle, though, the Generator is usually saying, “we could do this! and we could do that!” – coming up with ideas – while the Synthesizer takes the analytical role, questioning each idea and building and shaping it.

CK: For a lot of designers, the places where they work are not so receptive to pair design because they don’t think it’s efficient.

AC: That’s true. And what those places are missing is that in this post-industrial age, efficiency is less useful than effectiveness. Apple, for example, is ridiculously inefficient. They spend money to work and re-work a problem, and other companies would say they are wasting it. But Apple knows that saving money doesn’t lead to success; making their customers ecstatically happy leads to success. And of course success leads to money. But getting internal buy-in and support is certainly an issue for many. At Cooper we offer training in Design Leadership that helps with this.

CK: I have a feeling questions around that post-industrial business model could spark a whole conversation in itself. As we wind up here, I’m wondering if you know of a good case study that demonstrates Goal-Directed Design in action?

AC: I do! SketchUp is a great example. It’s an architectural sketching tool, and it’s complicated and powerful, and it has a learning curve, and it’s definitely not for everyone, but I love the program because the design is brilliant – at the macro level and the micro level. I used SketchUp to design the new chicken coop here at Monkey Ranch.

At the macro level they understood exactly the problem they were trying to solve. Other modeling programs like AutoCAD are painfully counter-intuitive, with a learning cliff rather than a learning curve. You have to be a professional to want to bother with one of those tools — there is nothing coherent; you just have to memorize about 80 tools.

With SketchUp, I know from their website and video and blogs that they used Goal-Directed Design. Their vision was not to displace AutoCAD; instead they had in mind this idea of an architect who has just presented the initial design of a building to the client, and the client says, “I love it! Could you make this stairway a little wider?” And in the AutoCAD world, it goes like this: “yes, we’ll have the drawings back to you in three days.” But in the SketchUp world, the architect says, “sure,” clicks the extrusion tool on the side face of the stairway, and stretches it out another foot; the staircase is wider, and everything in the model instantly adjusts to fit. That was their persona, that was their scenario, and that was the goal direction. So that’s the macro point of view – they understood that they weren’t trying to create an architectural drafting program that competed on giant, set-piece architectural drawings.

Also, at the micro level, they designed their controlling interface as a coherent system. Throughout the interface everything is consistent; all of the interactions have the same fundamental grammar. If you understand how one tool works, you understand all of the tools. And they anticipate the exacting needs of architectural planners, understanding just when you need to type in numbers or simply move the lines. This profound understanding of how you can build an interface permeates everything they do, and that’s a great example of successful Goal-Directed Design.

CK: That’s an inspiring example.

AC: It is. In the decades since Cooper conceived of Goal-Directed Design, the benefits of this practice have really been lasting and measurable. Project teams are able to start out with a shared understanding of goals and achieve early consensus on the design problem. And because designers develop empathy for the people who will use the product, they are able to focus on the right priorities. In the end, training and support and development costs are significantly reduced, and consumers experience ease and delight in the products.

CK: I think that’s the perfect note to end on. Thanks, Alan, for kicking off Cooper’s Masters In Conversation series, it’s been great to talk with you!

Interaction14 – Is it Science, Art or something else?

While Friday’s talks seemed quite level-headed compared to Thursday’s design extravaganza, they weren’t any less provocative. Take a look at some of Friday’s highlights (or sneak ahead to Saturday).

The De-Intellectualization of Design

Dan Rosenberg

Sketchnote by @ChrisNoessel

Big Idea:

Daniel Rosenberg, one of the old guard of Human-Computer Interaction, bemoaned the loss of a computer-science heavy approach to interaction design. He then shared his three-part antidote: Industry certification, employing Chief Design Officers, and better design education (read: computer and cognitive-science based). Guess which one of these was the audience’s “favorite”?

Full description of The De-Intellectualization of Design here.

An excellent counterpoint to Dan’s observation was Irene Au’s early-morning mindfulness talk.

Read More

Interaction14 – Food, Comics, and the UI of Nature

Interaction14 is off to a blazing start, and man if it doesn’t sound like a kaleidoscope of designers, thought-leaders, and crazy beautiful ideas. There’s everything from interactive skateboard ramps to talks about principles of user experience design learned from cats.

Exactly what kind of “conference” is this?

This year Cooper sent over a troop of people for inspiration and elucidation, and to capture some of the creative spark that only happens when you put hundreds of brilliant people in a big room for 4 days. In between workshops, talks, and happy hours, they’ve been slapping together some pretty stunning sketchnotes for us local folks. Here are notes from 4 of the talks that went down on Thursday. See sketchnotes from Friday and Saturday too!

Read More

Man’s Best App

How do you design an engaging and educational application that prepares a user with short-term memory loss for a lifestyle change?

For the November UX Boot Camp, designers, developers, and product managers from around the world teamed up to answer that very challenge for Canine Companions for Independence, the largest non-profit provider of service dogs.

Led by senior designers from Cooper, UX Boot Camp participants got their hands dirty learning new UX design techniques, collaborating with new teams, and working closely with stakeholders from Canine Companions.

From kickoff to design delivery, UX Boot Camp participants took a hands-on role in the generation, exploration, and synthesis of five distinct and fully-developed design concepts.

Read More

New Peers, Practices, and Perspectives

Takeaways from Cooper U in Philadelphia

A guest post by Cooper U alumna Hanna Kang-Brown

As a career changer and the first UX Designer to be hired at my company, I do a lot of self-learning on the job. Reading books and blogs has been essential to developing my UX process, but when I had the opportunity to attend Cooper U’s Interaction Design Training in Philadelphia this past December, I jumped at the chance. I wanted a week of hands-on training and the opportunity to learn a thorough interaction design process with a group of other professionals. Some highlights from the week and my biggest takeaways are below.

My Biggest Takeaways

Clarifying Process
I was already familiar with the interaction design process, but the course helped deepen my understanding of it through hands-on activities. I discovered ways in which I had cut corners in my design process, and how I could end up with a better product if I spent more time up front considering business stakeholder goals, developing personas, and sketching out scenarios.

Speaking of Sketching
I’ve always been a reluctant sketcher because I never thought I was very good at it. We did a lot of sketching, from user profiles to storyboards and wireframes, and it helped me gain more confidence and a better appreciation for its usefulness as a lightweight prototyping method.

Read More

Designing the Future: Cooper in Berlin

Most software projects are built around the question “What are we going to do next?” But occasionally we’re asked to think farther out. Projects focused on the 5-10 year range are more about “Where are we headed?” and “What’s going to inspire people?” These are different questions to ask, and answering them changes the usual process of interaction design.

I’ve been thinking about these questions for a while, and at the MobX conference in Berlin I conducted a workshop in which a group of 16 designers and strategists took a look at how you answer them.

So…how do you do it? The core of the matter is to understand what’s going to be different in the future you’re designing for.

Read More

Augmented Experience


Photo via Reuters / Carlo Allegri

Let’s be honest: Google Glass looks pretty silly. Its appearance is out of time, futuristic, and obnoxiously so. And it’s out of place in daily life—a strange accessory with mysterious purpose, as if someone were to walk around all day with a skateboard on a leash.

But Glass also points to an intriguing future, one in which the line between using a digital device and simply going about daily life is removed. Whereas traditional spectacles serve a corrective purpose, helping us see reality more clearly, Glass offers a new category of lenses that promise to augment the reality we see. It opens a vast new frontier for the practice of interaction design that, like the Wild West, is full of lawlessness and danger and promise. And it is the UX community that will shape this landscape; we will determine its character, and the impact it will have on people’s lives.

A key question all this raises is: what “reality” is Glass augmenting? At the moment, being a Google product, the augmentation is designed primarily to target the urban economic and social spheres. Looking down the street through Glass, you may see restaurant storefronts adorned with floating metadata describing the cuisine type and star ratings from previous diners. Turn your head, and an indicator points toward your next calendar appointment. Peer at a product on the shelf, and prices for similar products are displayed for easy comparison. You’ll always know where you are, where you need to be, and what you’re looking at. The reality that Glass augments is a realm of people, objects, places of business, and locations. In other words, it is what can be expressed in a database and efficiently searched.

Toward a better future

At this point in the conversation, the story usually veers into the realm of exasperation and despair. Google Glass represents the death of spontaneity! It will systematize and computerize our lives! Organic experience will be lost! (And, most insidious of all) Google will monitor and monetize every saccade of our eyeball, every step we take!


“Big Brother” from the film adaptation of Orwell’s 1984

Given the penchant for technologists to base business models on advertising and “big data” about their customers, it is not surprising that Google Glass can be seen as a kind of portable panopticon. But I think the truth that this device foreshadows is something potentially more benign, and almost certainly beneficial.

The dystopian narrative that depicts a society dominated by machines and ubiquitous surveillance is common, expressed through fiction, film, and even journalism, which tends to draw on the same sinister rhetoric. George Orwell’s 1984 describes the homogenization and suppression of culture through rules, systems, and constant surveillance. In a more recent popular expression, Pixar’s WALL-E imagines a future humanity composed of zombie-like innocents, shuttled along by automated chairs, staring feebly into digital screens, mobilized—and controlled—by machines. The plausibility of these futures is made even more vivid by the unfolding story of the depth of NSA surveillance.

To paraphrase a recent piece by Don Norman, it all depends on how we design and develop augmented reality applications. If we manage to create useful and utility-producing applications with wearable technologies like Google Glass, people will benefit. This seems at first more like a truism than truth. But the obviousness of the statement belies the underlying premise, which is that Google Glass and its future iterations are simply a canvas on which we can write the future of our “augmented” everyday experience. So let’s not leave it all up to Google, shall we?

Big ideas

Ideas for the positive future of augmented reality abound. Augmedix, for example, is a small company with a vision of Google Glass re-shaping the doctor-patient relationship. Increasingly, the burden of the new and fraught world of digital medical records is damaging this interaction. Doctors stare at screens instead of faces; they spend as much time clicking checkboxes and radio buttons as they do examining the bodies and listening to the voices of the people under their care. Augmented reality could turn this scenario on its head by allowing doctors to look at and converse with their patients while simultaneously accessing and transmitting important information through Glass. This will almost certainly lead to fewer errors, an increase in trust, and ultimately better health outcomes.


A doctor wears Glass with the Augmedix app.

Or consider William Gibson’s Spook Country, a novel in which a central character creates “locative art,” what you might call augmented reality sculpture. Imagine looking at a city fountain with your augmentation goggles and seeing a bloom of light and color where others see only water. That we could transform our physical landscape in a way that enhances its beauty—rather than simply enhancing its economic potential—is a stunning notion. Unlike 3D movie glasses or straight-up “virtual reality,” the idea of a physical/virtual mashup offers us a chance to experiment and play in realms previously available only to the world of screens and displays, without losing the sense of being present in a place, a loss that virtual reality cannot avoid. We remain in the real world.

The design of augmented reality

The first attempts to harness the power of Glass-like technology will be “ports,” shoe-horning old functionality into a new form factor. Text and email messages will appear, caller ID will notify you of a phone call, the front-facing camera will take a picture or video on command. But none of these use cases address new goals. They simply make achieving old goals incrementally faster or more convenient. I don’t have to lift my phone and look at the screen to see a text message or know who’s calling. I don’t have to lift my camera and press a button to take a picture. The difference in my experience enabled by porting functionality from my phone to Glass is a difference of degree, not a difference in kind.

More interesting will be the forays into using augmented reality tech to solve previously unmet goals. Augmedix is a good example, because it bucks a trend toward less personal medicine and solves both a doctor and a patient goal. Locative art is similarly interesting, because it provides an entirely new artistic medium and way of experiencing that art. Mapping and orientation in a visually augmented world represents another fundamental change, because it bridges the gap between the abstract 2D map and the immediately actionable—a translation that currently happens in the human brain.

Go get ’em

Augmented reality is in its infancy. Google Glass still faces some serious challenges, especially on the hardware front: the device needs to be miniaturized and made less obtrusive so that wearing it feels less like pulling a skateboard on a leash everywhere you go. But the frontier for experience design this device opens up is huge, and it doesn’t have to remain within the boundaries Google sets. Part of our challenge and calling as a UX community is to think deeply about what an augmented experience feels like, and how it shapes people’s lives. As you would with any user experience, let unmet user goals guide your design.

Your role in this revolution is just beginning.

Inside the IxDA 2014 Student Design Challenge

Photo by Jeremy Yuille

As co-chair of the 2014 IxDA Student Design Challenge with Dianna Miller, I recently had the pleasure of announcing this year’s theme, “Information for Life,” sponsored by the Bill and Melinda Gates Foundation.

Now in its fifth year, the IxDA Student Design Challenge (SDC) will run during the Interaction14 conference in Amsterdam, February 5-8, 2014. The competition brings together exceptional undergraduate and graduate students for both critical thinking and hands-on experiences over the course of the conference. Here, students have the opportunity to present their work in a way that shows, rather than tells, and it’s also a terrific venue for students to connect with colleagues, potential employers, funders, or new networks.

And I speak from experience — this competition holds a special place in my heart as I was a participant myself just a few years ago, in 2011.

Read More

Engaging Millennials – the UX Boot Camp: Wikipedia

As mobile devices become widely adopted, organizations are increasingly focused on designing engaging experiences across multiple platforms. At Cooper’s UX Boot Camp with Wikimedia, the non-profit took this a step further, challenging the class of designers to create a solution that facilitated content input and encouraged a new group of editors, specifically Millennial women, to contribute through mobile devices.

Read More
