Inside Goal-Directed Design: A Conversation With Alan Cooper (Part 2)

We continue our conversation with Alan Cooper at Sue and Alan’s warm and welcoming ranch in Petaluma, CA, which, in addition to themselves, is home to sheep and chickens, a cat named Monkey, and a farmer who works the land.

Part 2 brings us up to the present day, with discussions of the applications and fundamentals of Goal-Directed Design that support its success at Cooper and beyond.

From Theory to Practice

CK: Okay, so having established the foundation of Goal-Directed Design in Part 1 of our conversation, let’s fast forward now to after you started your company, Cooper Software. How has GDD figured in?

AC: Basically you find one person, understand their vision and their final desired end state, and then make them ecstatically happy about reaching their end state. That is the essence of Goal-Directed Design. And what you need are two things: 1) Find (or synthesize) the right person and 2) Design for that person. At a place like Apple, Steve Jobs was already that right person, and they needed to look no further. For us at Cooper, a team of trained designers needs to synthesize the representative user, called a persona.

CK: Can you say a little more about Personas and their context in the process?

AC: Personas are the end result of going out in the field and researching the users and the patterns that indicate what their desired end state is. Then we create the archetypical persona and walk that archetype through a scenario using a proposed solution, like a test flight in a simulator. And when your persona’s needs are satisfied in multiple scenarios, you know you are on the right track.

Designers at Cooper can go into healthcare, tech, or jet engine design, wrap their heads around it, articulate the representative user’s desired end state, and from there identify the right problem to solve. At that point, synthesizing form just becomes the work, not magic.

CK: I’ve heard you talk about pair design being part of the success of this goal-directed process. Can you touch on that a bit?

AC: Yes, at Cooper Goal-Directed Design is enhanced by our practice of pair design. Rather than wrestling with a problem alone, externalizing the problem with a partner usually yields the most success. And building on that, it turns out pairs work most effectively in particular combinations of skills. We found that designers tend to naturally fall into two camps, and we ended up calling these designers Generators and Synthesizers. You could think of the Generator as the driver and the Synthesizer as the navigator. You need both of them to get where you’re going, and it’s not that the Generator can’t navigate or the Synthesizer can’t drive, it’s just that if you try to navigate while you’re driving you might crash into something, and you’ll go slower, and you might miss turns. And if you try to drive while you’re navigating, you’re going to end up not taking the most optimal route, and you might forget to stop for gas when you should, and you might end up backtracking.

In the early days when we were inventing these roles, we were a little more prescriptive about them, with the Generators always at the whiteboard and the Synthesizers taking notes, but as this became successful we realized we didn’t have to be so doctrinaire about it; they could switch roles and become much more fluid and even more effective. But in principle, the Generator is usually saying, “We could do this! And we could do that!” – coming up with ideas, and the Synthesizer has the analytical role of questioning each idea and building and shaping it.

CK: For a lot of designers the places where they work are not so receptive to pair design because they don’t think it’s efficient.

AC: That’s true. And what those places are missing is that in this post-industrial age, efficiency is less useful than effectiveness. Apple, for example, is ridiculously inefficient. They spend money to work and re-work a problem and other companies would say they are wasting it. But Apple knows that saving money doesn’t lead to success, making their customers ecstatically happy leads to success. And of course success leads to money. But getting internal buy-in and support is certainly an issue for many. At Cooper we offer training in Design Leadership that helps with this.

CK: I have a feeling questions around that post-industrial business model could spark a whole conversation in itself. As we wind up here, I’m wondering if you know of a good case study that demonstrates Goal-Directed Design in action?

AC: I do! SketchUp is a great example. It’s an architectural sketching tool, and it’s complicated and powerful, and it has a learning curve, and it’s definitely not for everyone, but I love the program because the design is brilliant – at the macro level and the micro level. I used SketchUp to design the new chicken coop here at Monkey Ranch.

At the macro level they understood exactly the problem they were trying to solve. Other modeling programs like AutoCAD are painfully counterintuitive, with a learning cliff rather than a learning curve. You have to be a professional to want to bother to use one of those tools — there is nothing coherent; you just have to memorize about 80 tools.

With SketchUp, I know from their website and video and blogs that they used Goal-Directed Design. Their vision was not to displace AutoCAD; instead they had in mind this idea of an architect who has just presented the initial design of a building to the client, and the client says, “I love it! Could you make this stairway a little wider?” And in the AutoCAD world, it goes like this: “Yes, we’ll have the drawings back to you in three days.” But in the SketchUp world, the architect says, “Sure,” clicks the extrusion tool on the side face of the stairway, stretches it out another foot, the staircase is wider, and everything in the model instantly adjusts to fit. That was their persona, their scenario, and that was the goal direction. So that’s from a macro point of view – they understood that they weren’t trying to create an architectural drafting program that competed with set-piece giant architectural drawings.

Also, at the micro level, they designed their controlling interface as a coherent system. Throughout the interface everything is consistent; all of the interactions have the same fundamental grammar. If you understand how one tool works, you understand all of the tools. And they anticipate the exacting needs of architectural planners, understanding just when you need to type in numbers or simply move the lines. This profound understanding of how you can build an interface permeates everything they do, and that’s a great example of successful Goal-Directed Design.

CK: That’s an inspiring example.

AC: It is. In the decades since Cooper conceived of Goal-Directed Design, the benefits of this practice have really been lasting and measurable. Project teams are able to start out with a shared understanding of goals and achieve early consensus on the design problem. And because designers develop empathy for the people who will use the product, they are able to focus on the right priorities. In the end, training and support and development costs are significantly reduced, and consumers experience ease and delight in the products.

CK: I think that’s the perfect note to end on. Thanks, Alan, for kicking off Cooper’s Masters In Conversation series, it’s been great to talk with you!

Inside Goal-Directed Design: A Two-Part Conversation With Alan Cooper

Go behind the scenes in this two-part Masters In Conversation series with Alan Cooper, exploring the origins and applications of Goal-Directed Design (GDD). In Part 1 we rewind to the early 1970s when Alan was just starting out and the climate of programming and design was changing rapidly, forging the insights that led to the techniques of GDD. Part 2 brings us up to date with GDD as Cooper designers and teachers apply it today.

Part 1: In the Beginning…

Read More

Austin in SXSW – The Digital Master (1 of 3)

It used to be the case that we understood computation as a representation of the real world around us. It was used to model the effectiveness of bombs, cities, or patterns of life. But that has flipped. Now the physical world around us is an instantiation of a digital source. Our source used to be an analog: in the case of photography, a negative. The source is no longer analog atoms, but rather a digital master. This is the first of a three-part series. Follow the rest of the conversation in part 2 and part 3.

Austin, March 11, 2:50pm: You’re staring at your phone, desperately trying to figure out the most appropriate, break-through, next-level place you could possibly go. But you’re also moving; your feet propel you forward, guided by the overflowing list of lives you could be living at 3:00pm today. Welcome to the crowd of SXSW ’13, a horde of nerds, some of whom you’ve highlighted as potential friendships, contacts, and maybe something more. Jumping to your other compass, the Twittersphere, you search for what’s good in the last 2 minutes. Expo G? You’ve got a good 10-minute walk. It starts to rain, and you see a swarm of folks donning red ponchos with a line emerging behind them. Just in time, you happily wear a URL in exchange for a dry walk to the next venue. Despite bumping into other tilted-head walkers, you find yourself in a massive conference room, ready to be inspired, snap an Instagram, and grab some quotable references for your Tumblr later on. Halfway through the talk, it hits you: “What’s next?” You pull out your shiny glass master and realize 4:00pm promises 13 potential futures. The notion gives you pause. Imagine, what would SXSW be without the net? No digital schedule, no website swag, no live tweeting, no ambient cloud of intent. Just a room with a bunch of people talking. For better or for worse, our reality has flipped: what was once a world of physical things organized by people is now a world of digital things augmented by people. We look down for orientation, and up for verification. I’d like to share with you how SXSW taught me to stop worrying and learn to love the new master.

The digital master of the built environment

Making plastic junk is now a digital pursuit. One of the first unveilings at SXSW was a consumer-level 3D scanner. A couple of years ago the MakerBot was released with a promise to disrupt how real things are made. The cycle is now complete with the ability to scan an object into a digital mesh. The mesh can then be modified and printed out as a new plastic object. This is consumer level! For the price of a PC in ’93, you can purchase a 3D scanner and printer.

The demo object (scanned and printed) was a garden gnome, one of many crapjects waiting to happen.

Read More

Strategies for early-stage design: Observations of a design guinea pig

Where do you start when you’re approaching a complex software design problem? If you work on a large development team, you know that software engineers and UX designers will often approach the same design problem from radically different perspectives. The term “software design” itself can mean very different things to software architects, system programmers, and user experience designers. Software engineers typically focus on the architectural patterns and programmatic algorithms required to get the system working, while UX designers often start from the goals and needs of the users.

In the spring of 2009, I participated in a research study that looked at the ways in which professional software designers approach complex design problems. The research study, sponsored by the National Science Foundation, was led by researchers from the Department of Informatics at the University of California, Irvine. The researchers traveled to multiple software companies, trying to better understand how professional software designers collaborate on complex problems. At each company, they asked to observe two software designers in a design session. At my company, AmberPoint, where I worked at the time as an interaction designer, I was paired with my colleague Ania Dilmaghani, the programming lead of the UI development team. In a conference room with a whiteboard, the researchers set up a video camera, and handed us a design prompt describing the requirements for a traffic control simulation system for undergraduate civil engineering students. We were allotted two hours to design both the user interaction and the code structure for the system.

Jim Dibble and Ania Dilmaghani at the whiteboard in their research design session

Read More

Transforming healthcare infrastructure

(This article was published in the November/December 2010 issue of interactions magazine.)

It seems likely that we find ourselves at an inflection point in the evolution of healthcare. While the situation has certainly been brought to a boil by recent American political events, the opportunities for change fit into a much larger context; they have the potential to truly transform the delivery of healthcare globally.

Unlike some, I don’t believe our current healthcare system is totally broken. I’ve conducted design research in quite a number of clinical settings and have consulted for businesses representing many different aspects of the healthcare industry, including provider networks, medical-device manufacturers, and even health insurance companies. I’ve seen magic worked on a regular basis, and from a historical (and global) perspective, the standard of care in the developed world is astoundingly high. I am in awe of the abilities of doctors, nurses, techs, and other clinicians to consistently function at a very high level despite the fact that they’re forced to work with archaic infrastructure in less than ideal environments. (As for the insurance companies, perhaps the best thing to say is that they function to make money but could be dramatically more successful as businesses if they changed their approach to things.)

It is at this level—the level of infrastructure—where these big opportunities for transformation exist. It isn’t that we don’t know what kinds of patient and clinician behaviors and medical interventions result in healthy outcomes; it’s that at a systemic level, we’re not doing a good job facilitating these behaviors and driving appropriate interventions. The right changes here will provide a conduit for evolutionary change to cascade throughout the system to achieve dramatic improvements in the quality and cost of healthcare. Which isn’t to say that it also isn’t incredibly important for medical knowledge to continue to evolve; it’s just that we already know enough to dramatically drive up quality and drive down costs.

Many of the opportunities to improve our healthcare system can fit into three big categories: proactively engaging individuals to take better care of themselves; providing better interventional care beyond the walls of the hospital; and improving care delivery inside hospitals through standardization and better collaboration between clinicians, patients, and families. All three of these strategies require new infrastructure and perhaps a shift in the definition, role, and activities that characterize the hospital.

The first two ideas are mostly about what happens outside the hospital. These are things that architects wouldn’t traditionally worry about when designing hospitals. But that kind of thinking has gotten us into our current predicament, where the built “environment” for providing healthcare is sometimes an impediment to necessary change. If we step back and define a hospital as the nexus for healthcare in a community, we have a platform on which we can imagine the ideal infrastructure for keeping people as healthy as possible in a cost-effective way.

In the May+June 2010 issue of interactions, Hugh Dubberly suggested designers ought to help reframe what healthcare is and how it is delivered, as well as to reframe what it means for design to help. I couldn’t agree more, and in this spirit I propose reconsidering what healthcare infrastructure is necessary to better care for people, how design should address this new notion of infrastructure, and what this all means for the institution of the hospital.

Read More

Creating immersive experiences with diegetic interfaces

I like to think of Interaction Design in its purest form as being about shaping the perception of an environment of any kind. Yes, today the discipline is so closely tied to visual displays and software that it almost seems to revolve around that medium alone, but that’s only because as of now, that’s pretty much the only part of our environment over which we have complete control.

The one field that has come closest to overcoming this limitation is the video game industry, whose 3D games are the most vivid and complete alternate realities technology has been able to achieve. Game designers have control over more aspects of an environment, albeit a virtual one, than anyone else.

Lately I’ve been thinking a lot about this idea that interfaces can be more closely integrated with the environment in which they operate. I’d like to share some of what I’ve learned from the universe of video games and how it might be applicable to other kinds of designed experiences.

In Designing for the Digital Age, Kim Goodwin criticizes the term “Experience design” as being too presumptuous because we don’t really have the power to determine exactly what kind of experience each person with their own beliefs and perceptions has. Even when we work across an entire event from start (e.g. booking a flight) to finish (arriving at the door), there are still countless factors outside our control that can significantly impact how a person will experience it.

Video game designers on the other hand can orchestrate a precise scenario since almost every detail in their virtual world is for them to determine. They can arrange exactly what kind of person sits next to you on a flight no matter who you are or how many times you take that flight.

That isn’t to say that videogames don’t have their limitations. Of course, it isn’t completely true that game designers can determine who sits next to you. They can only determine who your avatar sits next to. The most significant weakness of videogames is the inability to truly inhabit a designed environment or narrative. As much control as we may have over a virtual world, as long as we are confined to experiencing it through television screens and speakers, it won’t be anywhere near comparable to our real world.

Fortunately, there’s a growing effort to address this lack of immersion.

A key part of the problem lies in how we are presented with, and interact with, complex information diegetically, that is, through interfaces that actually exist within the game world itself.

The 4 spaces in which information is presented in a virtual environment

Before continuing, it helps to be familiar with some basic concepts and terminology around diegesis in computer graphics, the different spaces of representation between the actual player and their avatar. The diagram above illustrates the four main types of information representation in games.
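Since the original diagram isn’t reproduced here, a minimal sketch may help make the taxonomy concrete. It assumes the four spaces carry the labels commonly used in game-UI writing (diegetic, non-diegetic, spatial, and meta); the article itself names only the first two, so the remaining labels and all of the example elements below are illustrative rather than drawn from the source.

```typescript
// A minimal sketch of the four spaces of information representation in games.
// Assumption: labels beyond "diegetic" and "non-diegetic" (the only two the
// article names) follow common game-UI terminology and are illustrative only.

type UISpace =
  | "diegetic"      // exists in the game world and the avatar can perceive it (e.g., an in-world computer terminal)
  | "non-diegetic"  // exists only for the player, outside the game world (e.g., a classic HUD)
  | "spatial"       // drawn in 3D world space but not "real" to the avatar (e.g., a floating waypoint marker)
  | "meta";         // drawn on the screen plane but representing the avatar's state (e.g., damage splatter on the camera)

interface UIElement {
  name: string;
  space: UISpace;
  visibleToAvatar: boolean; // does the character "see" it?
  renderedInWorld: boolean; // is it drawn inside the 3D scene?
}

// Hypothetical examples for illustration only.
const elements: UIElement[] = [
  { name: "ammo counter on HUD", space: "non-diegetic", visibleToAvatar: false, renderedInWorld: false },
  { name: "holographic objective marker", space: "spatial", visibleToAvatar: false, renderedInWorld: true },
  { name: "in-world computer terminal", space: "diegetic", visibleToAvatar: true, renderedInWorld: true },
  { name: "damage vignette on screen edge", space: "meta", visibleToAvatar: false, renderedInWorld: false },
];

for (const el of elements) {
  console.log(`${el.name}: ${el.space}`);
}
```

The two booleans capture the intuition in the text: a diegetic element is both rendered in the world and perceivable by the avatar, while a classic HUD is neither.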

[Image: Duke Nukem 3D]

Non-diegetic representations remain the most common type of interface in games. In first-person shooters, arguably the most immersive type of game since we usually see the scenery through our avatar’s view, the head-up display has remained an expected element since Wolfenstein 3D first created the genre.

Read More

NYC as an interface

New York City
Photo by Delcio G.P.Filho.

The Big Apple.

Many say it’s the greatest city in the world. Whether or not you agree, there’s no denying it’s an incredibly dense place with an overwhelming number of people and things to do. Not only are there over 40 million tourists annually, jostling to see the sights and get a taste of the cultural capital, but there are also over 8 million people living here, struggling to manage the tasks of daily living amongst all the tourists. That’s a lot of people with very different goals. How do they all figure it out?


(For those of you not in New York, you might want to consider pressing play for some mood music.)

The usability of cities

I’ve been on the road for the past few weeks and am struck by how some cities are easier to use than others. Since I’m in the business of interfaces, I’ve been thinking about it in those terms. Just like software, smaller cities with few features are generally (but not always) fairly easy to use. Once you have a large, complex city with many features, like NYC, it gets much more challenging to maintain that ease of use.

New York City is an incredibly powerful interface with multiple entry points and endless features. One might say it has feature bloat. It overloads the senses and it’s not always easy to navigate and understand, yet people learn to use it effectively and often grow to love it.

I love New York

In that way it’s like Adobe Photoshop – optimized for expert users, perfect for their needs once they have taken the time to learn how it works, but very intimidating to novice users. Over 40 million tourists enter the city each year and have to navigate the New York City ‘interface.’ How do they figure it out?

Read More

The Bird’s Nest & the television experience

[Image: the Beijing Olympics opening ceremony]

Amazement operated on many levels during the Opening Ceremonies of the Beijing Olympics. During each performance, my mind struggled to process what I was seeing. What is this? How in the world did they pull this off? Where does an idea like this even come from?

TV: These small boxes will now take the form of a keyboard, and the keyboard will sprout a peach blossom.

Doug: … Huh.

TV: Now the small boxes, which have made precise, machine-like movements for the last ten minutes, will reveal that humans have been operating them the whole time.
Doug: … Wait, what? … How …
TV: Now a globe will rise, and dozens of people will fly around it in precise circles.
Doug’s brain: [explodes]

In a Washington Post editorial, Roger K. Lewis recently wrote that NBC didn’t once mention the architects of the venue, Beijing National Stadium. Hmm. That’s funny. I didn’t mention them during the telecast either, but that’s because my brain had been reduced to a pre-verbal state.

Read More

Learning from How Buildings Learn

The BBC miniseries based on Stewart Brand’s How Buildings Learn became available on the Internet a few days ago. It’s chock-full of provocative stuff, and it lays out compelling arguments about how structures succeed or fail in satisfying the needs and goals of people. (Let’s hear it for design on TV! First Mad Men, now HBL. It’s a televisual golden age!)

The Ferry Building

As I watched the opening episode, I thought of the quintessential local example of a learning building: the Ferry Building in downtown San Francisco. Built in 1898, it served as a ferry terminal for points around the Bay; as San Francisco changed and bridges eased the traffic burden, it gradually fell into disrepair. In 2004, it re-opened as a gourmet food court, serving the prosperous downtown lunch crowd. San Francisco changed, and the Ferry Building “learned” to address a new set of needs. Beautiful.

Is architecture really a good analog for IxD?

Aside from all of the fascinating examples of the ways in which our built environment responds (or doesn’t respond) to change, what the miniseries reveals to me more than anything is the limitation of using architecture and construction as models for software design and development. Architecture serves as a helpful stand-in when you’re talking about the macro stuff — the planning process, the rough apportionment of the screen “real estate,” and discussions around extensibility or repurposing — e.g., Is this thing the first piece of the big structure, or is it the temporary thing that we live in while the big structure is built?

But when you’re talking about the way people experience things in a digital environment, architecture is a limited analog. Software is made up of subtle, nuanced interactions and ever-evolving technical capabilities. Interacting with software is a conversation between two active participants; it’s fast-paced and packed with immediate possibilities. For example, changing context in software seems more akin to a change in facial expression than, say, a movement to a different room. (It should, anyway.) The ever-evolving technical capabilities have created a world in which we’re all often experiencing some particular digital interaction for the first time; in fact, if someone wrote a book about how software is experienced, it could be called something like, “How Software Teaches Us How to Use It.”

Solutions beget problems, problems beget solutions

Of course, there’s another side of Brand’s perspective that’s relevant to our work: Most design projects (at Cooper, anyway) begin with what is presented as a straightforward task: Design a solution for the problem the clients have identified. Architects probably experience a transformation similar to ours, because the real problem is often quite different than what the client has articulated. Brand’s perspective is interesting to consider here, because our solution often simply modifies (or modulates) the problem — makes it smaller, hopefully — but still: Will our solution be able to handle the need to evolve to further reduce the problem? Of course, if anything could learn and teach at the same time, it’s software. But software that can learn … Hmm. Something sounds fishy about that. Remember that part in Terminator where they’re talking about how the computers took over?

[Start with Episode 1 of How Buildings Learn at Google Video, and thanks to smashingtelly for the tip] Read More

The next step for community design

Community design centers are non-profit organizations that provide high-quality design to underfunded and underserved areas of a community. They’re usually established as extensions of colleges and universities, and they’re intended to positively impact the surrounding community through design — usually through the physical build.

Back when I was pursuing my degree at the University of Cincinnati’s College of Design, Art, Architecture and Planning, I worked for one, with the intention of helping to revitalize one of the more depressed parts of Cincinnati. The focus was the design of a farmers market, an initiative that included contributions from Architecture, Planning, Industrial Design, and my own discipline of study, Graphic Design. The end result of our work is a vibrant, exciting environment, and this experience got me thinking about ways in which my current discipline could take part.

Read More