We’re designing an Alexa Skill with our friends at Carbon Five to help agile teams stay on track during daily standup meetings. From scriptwriting to prototyping and testing, we’re learning a lot about designing for voice UIs. Here’s what we learned about patterns.
Design patterns are shortcuts to existing design solutions. Designers rely on them to help us explore and evaluate our options, and to make decisions. Patterns also create consistency and predictability for users.
We’re trying to capture some of these patterns as we work on voice UI design for young platforms like Alexa. Here are five patterns we’ve identified and what each is best suited for:
A la carte menu
In this pattern, all options are presented up front to help the user decide what to do. This pattern resembles a classic dial phone menu: “press 1 for… press 2 for…” It’s ideal when you want to provide an overview of the options at a given moment, and it’s already popular in the help menus built into emerging skills, where asking Alexa for help returns a list of commands. It has drawbacks, though: the user must sit and wait while Alexa lists the options, which is especially user-unfriendly for longer lists.
Remember: users can’t interrupt Alexa. The Alexa best practices guide offers good advice to deal with this limitation: list only a few options at a time, then ask the user if they want to hear more. For longer lists, this pattern pairs nicely with the next one, where the user has to know what to ask by default, but can request options if needed.
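The “a few options at a time” advice can be sketched in plain Python. This is a hypothetical helper (not part of any Alexa SDK), assuming the skill tracks which chunk of the menu it has already read aloud:

```python
def chunk_options(options, size=3):
    """Split a long option list into small groups so the skill can
    offer a few at a time instead of reading the whole menu."""
    return [options[i:i + size] for i in range(0, len(options), size)]

def build_prompt(chunk, has_more):
    """Speak one chunk, then offer to continue if more options remain."""
    speech = "You can say: " + ", ".join(chunk) + "."
    if has_more:
        speech += " Want to hear more options?"
    return speech

# Hypothetical menu for a standup-meeting skill.
menu = ["start a meeting", "add a blocker", "skip a teammate",
        "end the meeting", "read yesterday's notes"]
chunks = chunk_options(menu)
first_prompt = build_prompt(chunks[0], has_more=len(chunks) > 1)
```

The skill would respond to “yes” by speaking the next chunk, and to anything else by dropping back into the normal flow.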
Secret menu
In the “secret menu” pattern, Alexa asks a question but doesn’t list options. Even though there is a set number of possible responses, they are not provided. The user must either take a guess or know what to do from experience. This pattern is best when there are simply too many possible options to name. For example, imagine opening up Siri and waiting for a list of all possible actions—it would take all day! The “secret menu” pattern works best for simple, straightforward requests with a broad range of acceptable responses, such as “what kind of music do you want to listen to?” This pattern’s effectiveness depends on the user’s skill level—it provides fast access to a broad range of features for anyone who knows what to ask. For new users, it can be hard to remember what Alexa can do, which is a common frustration.
To accommodate users as they learn, it’s best to be flexible with the kind of input you accept when using this skill. Try suggesting a few possible actions after a breakdown, even if you can’t name every option: “you can ask me to play an artist, an album, a song, or a radio station.”
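That recovery behavior—suggest a few example actions after a breakdown, without naming everything—can be sketched as plain Python. The intent table and hint list here are hypothetical, standing in for whatever the skill actually recognizes:

```python
def respond(user_input, known_intents, hints):
    """Handle a recognized request directly; on a breakdown, recover
    by suggesting a few example actions rather than the full menu."""
    if user_input in known_intents:
        return known_intents[user_input]
    # Breakdown: name a handful of categories, not every option.
    return ("Sorry, I didn't catch that. You can ask me to play "
            + ", ".join(hints[:-1]) + ", or " + hints[-1] + ".")

# Hypothetical examples for a music skill.
known = {"play jazz": "Playing jazz."}
hints = ["an artist", "an album", "a song", "a radio station"]
```

Being flexible about input means the `known_intents` lookup would in practice match many phrasings per action, not exact strings.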
Confident command
The “confident command” is like the secret menu, except that it’s entirely prompted by the user. This may not actually be possible within an Alexa Skill until the technology improves, but the main function of your skill is probably already an “any time” command anyway: “Alexa, tell Standup to start a meeting.” Nevertheless, it’s a pattern worth keeping in mind for options that could be invoked at any point during use. The confident command helps experienced users take control of the system’s functionality and tailor their own Alexa experience by asking for exactly what they want, when they want it.
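One way to model “any time” commands is a dispatcher that checks them before the current dialog step, so they work regardless of where the user is in the flow. This is a minimal sketch with hypothetical commands, not Alexa SDK code:

```python
# Confident commands bypass whatever dialog state the skill is in.
ANY_TIME_COMMANDS = {
    "start a meeting": "Starting the standup meeting.",
    "end the meeting": "Ending the meeting.",
}

def handle(utterance, current_step_responses):
    """Route an utterance: any-time commands win; otherwise defer to
    whatever the current dialog step expects."""
    if utterance in ANY_TIME_COMMANDS:
        return ANY_TIME_COMMANDS[utterance]
    return current_step_responses.get(utterance,
                                      "Sorry, I didn't catch that.")
```

The design choice here is ordering: checking global commands first is what lets an experienced user cut across the scripted flow.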
Call and response
An exchange of several questions and answers in a row, all for the same purpose. These questions may either build on each other (dependent questions) or not (independent questions). These two formats solve different problems. Dependent questions are great for narrowing down choices (example below), and independent questions are good for gathering discrete pieces of information like configuration options. The “independent” call and response is laborious, so use it sparingly—never for common or recurring functions. Think through the logic and use as few questions as possible to get to the right answer.
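Dependent questions—each answer narrowing the remaining choices—can be sketched as successive filters over a catalog. The catalog and fields below are hypothetical illustrations:

```python
# Hypothetical music catalog for a dependent call-and-response flow.
CATALOG = [
    {"genre": "jazz", "era": "50s", "artist": "Miles Davis"},
    {"genre": "jazz", "era": "70s", "artist": "Herbie Hancock"},
    {"genre": "rock", "era": "70s", "artist": "Led Zeppelin"},
]

def narrow(items, field, answer):
    """One question/answer round: keep only items matching the answer."""
    return [item for item in items if item[field] == answer]

# Each question builds on the last answer.
after_genre = narrow(CATALOG, "genre", "jazz")   # "What genre?"
after_era = narrow(after_genre, "era", "70s")    # "Which era?"
```

Thinking through the filter order ahead of time is what keeps the question count low: ask the question that eliminates the most options first.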
Educated guess
Easy to learn, hard to master—this is a simple pattern that takes smarts to pull off. The educated guess simply prompts Alexa to guess what the user wants based on previously gathered user info. This pattern is most useful when a user is significantly more likely to want one or two options out of a larger set, but the system has to be reasonably confident to avoid wasting their time. For a good educated guess, Alexa needs a strong sense of the user’s mental models, behaviors, and context.
When it works, it’s delightful—the user feels like Alexa really knows them. If the guess is wrong, however, you get the opposite effect, potentially causing frustration.
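The “reasonably confident” requirement suggests a simple thresholding sketch: only volunteer the guess when the top option clearly dominates, otherwise fall back to an open question. The preference scores here are hypothetical stand-ins for whatever signal the skill gathers:

```python
def educated_guess(preferences, threshold=0.6):
    """Volunteer a guess only when the top preference clears a
    confidence threshold; otherwise ask an open question."""
    top, score = max(preferences.items(), key=lambda kv: kv[1])
    if score >= threshold:
        return f"Want me to play {top}, like usual?"
    return "What would you like to listen to?"

# Hypothetical listening history, normalized to shares.
history = {"jazz": 0.8, "rock": 0.15, "classical": 0.05}
```

Tuning the threshold is the “hard to master” part: set it too low and wrong guesses erode trust; too high and the delight never happens.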
These are just a few of the patterns that came out of our scriptwriting process. We’d love to hear what patterns you’ve encountered with your VUI scriptwriting!
ARE YOU DESIGNING A VOICE UI?
We’d love to hear about your experience! Join our voice UI channel on the Cooper Friends Slack, send us your favorite articles on Twitter or send a note to [email protected] to chat about your voice UI strategy.