Cooper has been studying voice UI for the last year and decided to get our hands dirty in partnership with Carbon Five to create an Alexa Skill of our own. Below, you'll see how we translated our interaction design methods to work for a very new kind of interface: voice.
Start with an idea
One drizzly day, Cooper and Carbon Five huddled in a conference room and spent hours getting familiar with Alexa's platform: its limitations, its patterns, and how Skills work. "Skills" are essentially apps built for Amazon's voice UI platform, Alexa. There are 10,000+ Skills, for everything from banking to Jeopardy! to Cat Facts.
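Under the hood, a Skill's backend is a function that receives a JSON request from Alexa (a launch, an intent the user spoke, or a session end) and returns a JSON response containing the speech to say back. Here's a minimal sketch of that request/response loop; the intent name `StartStandupIntent` and all of the spoken lines are hypothetical, not taken from the actual Skill:

```python
def handle_request(event):
    """Route an incoming Alexa request to a spoken response.

    `event` is the JSON payload Alexa sends to a Skill's backend.
    The intent name below is a made-up placeholder for illustration.
    """
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        # User opened the Skill without asking for anything specific.
        speech = "Welcome to the standup skill. Say 'start standup' to begin."
    elif request.get("type") == "IntentRequest":
        intent = request.get("intent", {}).get("name")
        if intent == "StartStandupIntent":
            speech = "Starting standup. Who would like to go first?"
        else:
            speech = "Sorry, I didn't catch that."
    else:
        speech = "Goodbye."
    # Alexa expects the speech wrapped in a response envelope like this.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False,
        },
    }
```

The interesting design work, as we'd soon find out, lives almost entirely in choosing those spoken strings, not in the plumbing.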
Then we took to post-its to brainstorm ideas for Skills we could use in an office setting. We considered Skills to organize happy hours, capture meeting notes, and find missing coworkers. Eventually, we landed on a Skill that helps Agile teams manage their standups.
If you’re not familiar, an ‘Agile standup’ is a daily meeting where project teams come together for a quick, 15-minute sync. They share what they’re doing today, what they did yesterday, and where they’re stuck. Ideally, an Agile standup encourages accountability, problem-solving, and team bonding. However, standups frequently go sideways, and we wanted to identify a way to keep them on track.
Apply the process
A few weeks later, Carbon Five and Cooper met again to create proto-personas and scenarios. Problem in hand, we used Cooper’s Goal-Directed Design method to flesh out the vision and the foundation for our Skill.
The Carbon Five team uses Agile standups, so they served as subject matter experts. Cooper Managing Director Nate Clinton interviewed them, focusing on process, behavior, and pain points. We discovered several behavioral patterns, leading to three proto-personas: Bryan, Cheryl, and Johnny.
These proto-personas let us move quickly while grounding our solution in real-world needs. (Sidebar: For most projects, Cooper designers would create personas from insights and behavior patterns uncovered in extensive, ethnographic research. For this particular project, the team went with a lighter-weight approach.)
With these archetypes defined, we generated ideas for scenarios where the 'Standup Skill' would be used. We gave our scenarios titles like "Where's Johnny?" and "Bryan gets the meeting back on track." Then we fleshed out these titles into narratives that explored how the proto-personas and the Skill would work together.
Scenarios also applied perfectly to the voice UI design process, bridging the modeling phase with scriptwriting.
On UX design projects, after working through scenarios, Cooper designers typically wireframe or sketch out interfaces and layouts. With voice UIs, there are no interfaces or layouts to wireframe. Instead, the design team wrote the dialogue, or script, that would be the interaction foundation for our Skill.
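In practice, a dialogue script plays the role a wireframe plays in a visual design: a lightweight artifact the team can review and read aloud. One simple way to model it is as an ordered list of speaker/line turns. The prompts and utterances below are illustrative placeholders, not lines from the actual Skill:

```python
# A hypothetical fragment of a standup dialogue, modeled as
# alternating (speaker, line) turns.
SCRIPT = [
    ("Alexa", "Good morning! Ready to start standup?"),
    ("User", "Yes, start standup."),
    ("Alexa", "Great. Bryan, what did you work on yesterday?"),
    ("User", "I finished the login flow and started on notifications."),
    ("Alexa", "Thanks. Anything blocking you?"),
]


def format_script(script):
    """Render the turn list as a readable script for table reads."""
    return "\n".join(f"{speaker}: {line}" for speaker, line in script)
```

Keeping the script as plain data like this makes it easy to print for a read-through, or to reuse the same turns later when wiring up a prototype.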
In our next blog post, we'll wrestle with writing a script for Alexa and running our very first "prototalking" session (a word we coined to describe prototyping for voice UI).
We love talking tech at Cooper, but it's not just the machines we want to hear from! We want to know your perspective on the future of voice UI. Stay in touch with us through our voice UI channel on the Cooper Friends Slack, send us your favorite articles on Twitter, or send a note to [email protected] to chat about your voice UI strategy.