Designing or redesigning a product often feels like a risky proposition, especially in today's business climate. Those responsible for defining the product offering and marketing want reliable, measurable data to define success both incrementally and overall.
Hard data helps us make choices about where to spend resources, but placing a product under the microscope every step of the way can also introduce as many opportunities for error as it avoids. By focusing on how a product performs in the lab without broader knowledge of the user's environment and goals, measurement alone may be misleading. To get the most value and meaning out of user feedback, it is important to choose the appropriate method for conducting and analyzing user research.
User research can be roughly broken down into two types: usability testing and ethnographic field research. Many people are already familiar with usability testing, and many companies make use of it during development. However, ethnographic field research can yield valuable results for improving products that can't be easily measured by usability testing.
While many usability and research professionals are familiar with these techniques, to those responsible for managing product development, research may seem like time taken out of the development cycle. But, by understanding how different types of feedback from actual users fit into the development process, product managers, developers, and marketers can make better choices about a product and its focus.
Usability testing and ethnographic research defined
Let's look at the major differences between usability testing and ethnographic research and how each relates to the development process.
Usability focuses on measurable characteristics of a user's interaction with a product. Assessing the usability of a product relies on standardized tests that yield quantifiable data. In usability testing, results often reveal trends in user behavior, pointing toward problem areas as well as successful aspects of the product.
Ethnographic research techniques, on the other hand, focus on the observation of users in a real-world setting. Watching software users try to achieve real goals can be time intensive, but yields valuable qualitative information about the usefulness of a product. The data gained using ethnographic techniques helps in developing solutions to the problem areas diagnosed by usability.
User research and the development process
At the highest level, creating a product means defining what it is that the product offers, developing an expression or manifestation of that product, and delivering it to customers. Incorporating feedback from customers requires placing the product (or a prototype of it) in front of them to gauge their reaction. The following diagram illustrates where user research can be used most effectively to validate design choices and learn how to improve the product.
Beginning a product development cycle with ethnographic research helps generate ideas for the product offering. It also helps product developers understand users' "mental model" of what they want to achieve so that the software can better reflect how people actually work.
Usability testing can be performed during the development cycle (using paper prototypes or other models) or on a finished product. Because the findings are generally measurable and quantitative, usability research is especially useful in comparing specific design elements to choose an effective solution.
Ongoing user research can be done to continue improving a product over the short and long term. Path A shows how results can be used to refine the design to make general tweaks to existing design elements. This would be most beneficial when preparing a "point release" or an interim version of the software meant to generate sales within an existing customer base.
Path B shows how usability testing results can also be used to focus new ethnographic research for more major product revisions. If a product tests poorly, it may indicate more than poor interface mechanics. Doing ethnographic research to determine what needs the software is not meeting will help you find a competitive edge and produce features in your product that users will value in their everyday lives.
Choosing an appropriate method for conducting user research is both a matter of timing and a matter of aligning the goals of the research with the results it will produce. To get value from your efforts, it is important to understand what to test for, and how to use those results.
Using usability methods
Customer feedback gathered from usability testing is most useful when you need to validate or refine the interface mechanisms of a product, or the distinct form and expression of an offering.
With usability testing, specific mechanisms that tell the user where information or features are located can be analyzed to gauge the effectiveness of the product design. Testing can be applied to the user interface (e.g., drop-down menus, navigation), interactions (mechanisms for operating the software, behaviors) or visual design (readability of type, appearance of buttons).
Usability is especially effective at testing:
- Naming: Do section/button labels make sense? Do certain words resonate more than others?
- Organization: Especially for products that deliver information (as opposed to providing a service), is information grouped into the right categories? Are there too many, or too few? Are items located in the places customers might look for them?
- First-time use or "discoverability": Are common items easy to find for new users? Are instructions clear? Are instructions necessary?
- Effectiveness: Can customers complete specific tasks? Are they making missteps? Where? How often?
Be sure that what you are testing is actually measurable and that the results will be useful. When preparing usability testing, imagine that you have already gathered a set of test results. Do they indicate a solution that is clearly better?
For example, imagine you are developing a word processing application and are planning an interim release to generate sales. You might conduct usability testing in which you ask respondents to format the type in a document to test how well the feature works. This would result in measurable data such as: 50% of respondents formatted type using the toolbar, 40% used the key command, and 10% could not format type at all. This might suggest that the two mechanisms for formatting type are effective, but that support should be provided for those who cannot find those mechanisms.
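Tallying this kind of result is straightforward. As a minimal sketch, assume each test session has been reduced to a single label recording how the participant completed (or failed) the formatting task; the labels and counts below are hypothetical, chosen to match the 50/40/10 split described above:

```python
from collections import Counter

# Hypothetical session log: one label per participant recording how
# they formatted type (values are illustrative, not real study data).
sessions = ["toolbar"] * 5 + ["key_command"] * 4 + ["failed"] * 1

total = len(sessions)
counts = Counter(sessions)

# Convert raw counts into the percentage breakdown a report would cite.
breakdown = {method: round(100 * n / total) for method, n in counts.items()}
print(breakdown)  # {'toolbar': 50, 'key_command': 40, 'failed': 10}
```

The same tally works for any task with discrete outcomes; the analysis only becomes contentious when the outcomes themselves (like "satisfaction") resist being reduced to a label.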
On the other hand, using this type of feedback to evaluate the overall concept or formulate new ideas for the product can be problematic. What does it mean if 80% of people find formatting type satisfactory? Does this mean people like the effect of formatted type, or the mechanism used to format it? How would you refine the product to satisfy customers using this data? Because usability techniques generally call for surveying people about a product abstracted from their actual setting and objectives, the data gained tends to be a litmus test for how well received the product is, but not how well it serves users' real needs.
Using ethnographic techniques
Ethnographic research can point out opportunities to serve user needs that are currently unmet, which can help you keep or create a competitive edge. Observing real customers using an actual product can provide information about the overall product offering and how it can be extended. These observations will not only reveal problem areas to product designers, but will often provide clues about how problems can be addressed.
Things that are especially effective to observe:
- Blind spots: Are features going unused? Why? Are people unaware of the tool, or do they just not need it?
- Toolboxes: What tools do people use frequently? How do people think about them? How are they being used? What frequent or repetitive activities are people performing that might be better served by creating a special tool?
- Software crutches: People in the real world will develop ways to make up for shortcomings in software in order to achieve their goals. Look for places where people leave your product and turn to Post-Its, printouts, or other solutions to get the job done.
- Missteps: Users who perform tasks "incorrectly" are a great source of constructive data. Look for where people go when they make mistakes and why—it may be a good hint at how to better organize the application.
- Audio cues: Many people, especially while being observed, will provide a running soundtrack of what they are thinking while using software. This is invaluable when trying to understand how a person relates to the product. Listen for frustration, mnemonic chants ("F4, shift, then enter..."), and indecision ("Do I want to revert to temp.bak?").
- Goals: Operating software is not the end goal for any user. Pay attention to what users are trying to achieve. Is it an object (creating a file)? Or a process (telling Mr. Smith the status of his account)?
Going back to the example of the word processor, during an interview with a reporter working on an article, you might observe her operating the software flawlessly. But a stack of index cards on her desk that she uses to organize themes and facts might indicate that there is a need for a better outlining feature that is as flexible and accessible as those index cards. This also says a lot about how she organizes her thoughts: small snippets of information are rearranged until a cohesive argument appears. By observing user behavior and asking open-ended questions about the choices they make, you will be able to spot behavioral patterns that need tools to support them.
Other things to consider when conducting ethnographic research:
- Cost: The effort and expense required to find users of your software and observe them in their environment is not trivial. Creating a relationship with a set of your customers who are willing to be observed can pay off in the long run.
- Sample size: Although the cost per person is greater, ethnographic field research requires a much smaller sample size than usability testing to produce comparable insight. There is no hard and fast rule about picking the right number of subjects, but patterns often start to emerge after about 10 interviews; conducting more than 40 interviews will generally result in redundant findings.
- Observing unobtrusively: While it is important that you focus on observing your subjects, it is not necessary to pretend that you're not actually there. During an interview, make sure your subject is comfortable: if they want to converse, don't shut them down in an effort to act "objective." At the same time, keep a grain of salt handy for those times when someone is performing for you. Be ready with clarifying questions that steer people toward specifics.
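The "patterns emerge, then findings become redundant" dynamic behind the sample-size guidance above can be made concrete. As a minimal sketch (the tagging scheme and interview data are entirely hypothetical), suppose each interview's notes are reduced to a set of finding tags; tracking how many tags are new per interview shows when research is approaching saturation:

```python
def new_findings_per_interview(interviews):
    """For each interview (a set of finding tags), count how many
    tags have not appeared in any earlier interview."""
    seen = set()
    counts = []
    for tags in interviews:
        fresh = set(tags) - seen
        counts.append(len(fresh))
        seen |= fresh
    return counts

# Hypothetical tagged notes from three interviews.
interviews = [
    {"index_cards", "outline_need"},
    {"outline_need", "printout_crutch"},
    {"index_cards", "printout_crutch"},  # nothing new: nearing saturation
]
print(new_findings_per_interview(interviews))  # [2, 1, 0]
```

A run of zeros (or near-zeros) at the tail is a practical signal that additional interviews are likely to be redundant, which is one way to decide when to stop within the 10-to-40 range suggested above.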
Take the time to choose and plan user-testing techniques. Match the appropriate technique to your development cycle and needs—your product will benefit, and you'll avoid wasting time and resources. Simply putting a product "to the test" in a lab to see whether it passes or fails may provide a lot of data, but not necessarily a lot of value. Use ethnographic techniques to gain clues about what users need and what they expect while using usability testing to rate the effectiveness of specific tools, interface elements, and design choices.