Just as every author needs an editor and every engineer’s code needs QA, every designer’s work can benefit from evaluation. Does it surprise you that I’m saying this? As UIE’s Christine Perfetti once pointed out to me in an interview, Cooper is better known for advocating up-front research, effective process, and skilled designers than for promoting the value of usability testing and other evaluation techniques. It’s true that given a limited budget (which most people have these days), we think investing in these early-stage activities yields greater value. That said, evaluation is so important that every design project at Cooper has evaluation techniques built in. When, how, and how often we evaluate depends on the nature of the project.

Like good programmers, good designers begin a type of design evaluation themselves, constantly assessing the design by throwing scenarios at it and asking their teammates to do the same. When the personas and scenarios are based on good data and applied by skilled designers, this approach identifies the majority of problems within a few minutes after a sketch first appears on the whiteboard. Of course, we all have our blind spots, so this kind of on-the-fly evaluation isn’t enough, especially when you’re several months into a project and getting awfully close to the problem. Another pair of eyes is essential, and many more pairs of eyes are better still.
All of our projects at Cooper include expert reviews. Projects involving interaction design always include scenario walkthroughs. We apply both techniques at multiple stages. Not long after the first framework sketches begin to coalesce or the first design language studies are done, a senior designer examines and critiques each solution based on design principles as well as the goals and skills of the personas. These reviews continue on a regular basis throughout the project, and may involve more than one reviewer. At a minimum, scenario walkthroughs involve product managers and engineers. On any project that involves a complex domain, it’s essential to involve expert users in these walkthroughs as well. Both expert reviews and scenario walkthroughs are quick and inexpensive. Best of all, because the design team explains its thinking in either case, both techniques can catch issues while your sketches are still too vague for even a paper prototype usability test.
Of course, some flavor of usability testing is a good idea if you can do it. However, testing isn’t necessarily the “gold standard” of evaluation that many people believe it is. As a result of a series of comparative usability evaluation (CUE) studies, Rolf Molich has concluded that “There’s no measurable difference in the quality of the results produced by usability tests and expert reviews.” What’s more important, I think, is that Molich and his colleagues also demonstrated that no evaluation technique is perfect: no single evaluator or team of evaluators found every issue, regardless of what technique they used. In other words, although both activities draw upon science, design evaluation is no more a science than design itself.
Does this mean we shouldn’t do evaluation? No! It means that the more essential it is to get the design right, the more evaluation techniques you should apply, and the more often you should apply them. If you have experienced designers on the job and you’ve applied good techniques to arrive at your design, then it’s probably not tragic if you don’t run any usability tests, as long as the consequences of design errors are small. If a usability problem in your e-commerce shopping cart might cost millions (and it might just!), then it would be wise to run a test or two. If it’s my loved one lying in a hospital bed being treated by your medication pump, then I hope you’ve tested more than once or twice, and I hope you’ve taken advantage of every evaluation technique available.