Why is Google Maps on a mobile device so amazing and delightful? Why does Word Lens feel so mind-blowing? Why does a Prius feel so good when you get in and go? Why does it feel satisfying to look down at the lighted keyboard on the Mac?
It is noteworthy when the design of an experience is so compelling that you feel wonder and delight. When designed right, it feels totally natural; some might even say it is truly “intuitive.” No training is needed, no set-up, no break in flow; the tool fits seamlessly, improving without disrupting your experience. It’s like a little bit of magic.
So how do you design a delightful, magical experience?
In the digital world magic experiences are more likely to follow technology breakthroughs. New ways to give input (touchscreens, gestures, sensors), output (3D, haptics), and raw processing (speed, power) all provide opportunities for unexpected delight. These days passive input is an especially rich field because devices have many more sensors, and the raw processing power is ample enough to provide real-time turnaround on data-intensive tasks.
I’m using passive to describe input which is largely listening and processing signal which is self-identified, as opposed to active input where signal is initiated by the user with specific intent. Active input is keyboard, mouse, touch, and gesture. Passive input is background processing of optical, audio, kinesthetic, or other signal, and programmed response to this. In reality there’s more of a spectrum between active and passive, not a strict divide.
If you’ve got a smartphone, a Mac, or a new car, chances are your experience is augmented with passive input magic. GPS, accelerometer, light sensor, mic, OCR, RFID, and facial/object recognition are all used as passive input. But the passive input signal alone isn’t going to deliver delight. What you do with the signal is where the magic happens.
Fully passive input, quietly helping in the background
Some sensors run in the background, quietly listening for the right signal that tells them to kick in and help. The fact that you don’t need to switch into a mode first makes the experience smooth and seamless; the magic just happens.
Your MacBook Pro uses an optical sensor to evaluate the amount of ambient light. When the light falls below a certain threshold, back-lighting under the keyboard improves your ability to visually target the keys.
It’s a small feature but it’s super delightful. Your computer “knows” when to assist and jumps in to provide illumination. You don’t need to interrupt what you’re doing, but an automatic and subtle shift in the experience makes it better.
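The logic behind a feature like this is simple to sketch. Here is a minimal, hypothetical version of threshold-based backlight control; the threshold values, brightness curve, and function names are my own assumptions for illustration, not Apple’s actual implementation:

```python
# Hypothetical sketch of ambient-light-triggered keyboard backlighting.
# All numbers and names are illustrative assumptions.

DARK_THRESHOLD = 50    # lux below which the backlight turns on
BRIGHT_THRESHOLD = 80  # lux above which it turns off (hysteresis gap)

def backlight_level(ambient_lux: float, currently_on: bool) -> float:
    """Return backlight brightness (0.0-1.0) for an ambient light reading.

    The gap between the two thresholds prevents flicker when the
    reading hovers near a single cutoff point.
    """
    if currently_on:
        if ambient_lux > BRIGHT_THRESHOLD:
            return 0.0
        # Dimmer room -> brighter keys, scaled linearly.
        return min(1.0, 1.0 - ambient_lux / BRIGHT_THRESHOLD)
    if ambient_lux < DARK_THRESHOLD:
        return min(1.0, 1.0 - ambient_lux / BRIGHT_THRESHOLD)
    return 0.0
```

The hysteresis gap is the design detail that keeps the magic invisible: a single threshold would make the keys flicker on and off as you shift in your chair.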
As more cars adopt it, keyless entry and ignition like the Prius’s may come to seem unremarkable, but the design is still delightful.
Walk up to your locked car with your keys in your pocket; reach for the handle without taking them out and the car unlocks. Sit down and press the start button with the keys still in your pocket. Security is a necessary evil; allowing it to recede into the background of the experience is delightful.
Imagine speeding along the freeway: you’ve been lulled into a less aware state, and suddenly traffic ahead is at a full standstill. Your reaction time is not what it should be; you didn’t notice until it’s too late to stop. But your Volvo S60 has been paying attention. It’s been watching out for you, and when it senses the stopped traffic it applies the brakes for you. Its finely tuned system adjusts the braking to match the distance, and your car stops a few feet from the bumper of the car ahead of you.
The radar system in your car searches for possible collisions with other cars or pedestrians, warning you and even taking action if you don’t. There’s a lot that can go wrong if the system doesn’t correctly identify and react to danger, but when it works as designed the experience is magic. Your car ceases to be a dumb hunk of metal hurtling around the roads and becomes an intelligent agent, working to keep you safe and protected. If the system only gave you a warning, the experience of driving would be interrupted; instead it takes action and assists, improving your ability to drive safely.
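The escalation from silence to warning to intervention can be sketched with basic physics. This is an illustrative decision rule only; real systems like Volvo’s fuse radar and camera data and are far more sophisticated, and every number and name here is an assumption:

```python
# Illustrative sketch of automatic emergency braking logic.
# Uses the stopping-distance relation v^2 = 2 * a * d.
# Thresholds and names are assumptions, not Volvo's implementation.

MAX_BRAKING = 8.0  # m/s^2, roughly full braking on dry asphalt

def required_deceleration(speed_mps: float, gap_m: float) -> float:
    """Deceleration needed to stop within gap_m meters."""
    return speed_mps ** 2 / (2 * gap_m)

def autobrake_command(speed_mps: float, gap_m: float,
                      driver_braking: bool) -> str:
    needed = required_deceleration(speed_mps, gap_m)
    if driver_braking or needed < 0.3 * MAX_BRAKING:
        return "monitor"  # plenty of margin: stay silent
    if needed < 0.7 * MAX_BRAKING:
        return "warn"     # alert the driver first
    return "brake"        # too late to wait: brake autonomously
```

The design point the essay makes lives in that last branch: a system that stopped at "warn" would interrupt the driver, while one that proceeds to "brake" assists.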
Modal passive experiences, input with a little prompting
The form factor and design of hand-held devices often force you to open an app (entering a mode) before delivering passive input goodness. Limited processing power, battery life, and screen size simply don’t allow the activity to fade to the background. Also, no good interaction paradigm has been created for automatic, smooth, and intelligent switching between passive input modes. If one existed, it would require fewer modal choices, delivering the right assistance at the right time without requiring a prompt from you.
There are a number of great apps that, once launched, use passive input to deliver super delightful experiences.
Open Google Maps on a mobile device and it not only displays a map, but it pinpoints where you are and shows it. Move your device and the map moves to reorient based on which direction you are facing.
It’s a magical experience because the map ceases to be an abstract puzzle. Its sensing and orienting to your position makes the map personal, it becomes an augmentation of reality, another view of where you are. This transformation is subtle, but deeply satisfying. The map unifies with the territory, its utility shifts from planning the route to navigating it in real-time.
Recent versions of iPhoto come with an amazing feature, the ability to automatically recognize and tag faces in your photo collection.
Once trained, the accuracy with which iPhoto performs this task is amazing. Add new photos to your collection and iPhoto figures out who’s in them and organizes them appropriately. It’s a satisfying and delightful experience once you’ve trained it. Training doesn’t take a huge amount of time, but training anything takes away from the magic. Somehow training feels like doing work; you are part of the magic trick, not fully able to sit back and enjoy the show.
Nuance’s Dragon voice transcription software used to take hours to train to recognize your voice. The experience was tedious and time-consuming. This upfront work made it hard to feel wowed once it started working.
Today you can download a free app for your iPhone and simply start speaking. It’s a pretty delightful experience because it just works. Your voice is instantly transformed into text.
The first version of Red Laser was novel, but failed to delight. What shifted the experience was eliminating the management and preparation required.
Version one required users to take a clear photo of the barcode within a tight frame. The second version just asks users to point the video camera and loosely target the barcode. It feels easier, more natural, more like how the eye operates. The experience improved and “adoption rates shot up”:http://www.uxmag.com/strategy/how-ux-can-drive-sales-in-mobile-apps.
Point Word Lens at a sign in a foreign language and instantly read it in your native tongue.
A lot is going on under the hood to deliver this magical experience. Optical character recognition of the foreign text from a video grab, translation of it, and overlaying the video with replacement text, all in real-time. Of course you don’t see any of this, and that’s part of the magic. Your experience isn’t interrupted, you see video that one instant is Spanish and the next is English. The video feed just got a lot more useful. Of course they also need to get the translation at least part-way right or it doesn’t matter.
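That under-the-hood pipeline can be sketched schematically. Here `ocr`, `translate`, and `overlay` are stand-ins I’ve invented for the heavy lifting; this is the shape of a recognize-translate-replace loop run on every video frame, not Word Lens’s actual code:

```python
# Schematic of a per-frame recognize-translate-replace pipeline.
# ocr(), translate(), and overlay() are hypothetical stand-ins.

def process_frame(frame, ocr, translate, overlay):
    """One iteration of the loop: find text, translate it, paint it back."""
    regions = ocr(frame)                      # find text and where it sits
    for region in regions:
        replacement = translate(region.text)  # e.g. Spanish -> English
        # Paint the translation over the original text, matching its
        # position so the video feed looks untouched.
        frame = overlay(frame, region.bbox, replacement)
    return frame
```

The delight depends on this whole loop completing within a frame’s budget; the moment any stage lags, the seam shows and the magic breaks.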
Pleco is another iPhone app that translates printed text. Like Word Lens it uses the live video feed. Unlike Word Lens the translation is fed back to you in a text box at the bottom of the screen instead of overlaid into the video feed.
This may seem like a small difference, but it interrupts and abstracts the experience. If your goal is to learn another language, seeing both at the same time would be useful, but for simply understanding, replacing the foreign language in place is a far more delightful experience. It feels more natural, and the interface itself doesn’t dominate the presentation.
Sometimes all you need to do is give permission for the magic to happen. When you land on a web page in a foreign language, the Google Chrome browser automatically offers to translate the page for you. If you accept, the page quickly transforms before your eyes into language you can understand.
What makes this delightful is that you are prompted at the right moment, with assistance that is immediate and doesn’t take you out of your browsing experience. The page maintains its layout, but suddenly the words are all familiar. Links still work, images still show up; it’s almost as if a part of your brain has been turned on which can suddenly understand Japanese.
The promise of Jibbigo is huge: you speak into your phone in English, and your phone translates what you say and speaks it in Japanese.
In practice Jibbigo gets the job done, but it never feels magical or delightful. You aren’t forced to switch modes, which is good; you speak words in and a spoken translation comes out. But your experience is still interrupted by the significant processing time between input and output. The lag between speaking and hearing the translation is more than noticeable; it dominates the experience. To be delightful you need near-instant performance. Google Maps works well because it finds your location immediately; Word Lens wows because the words are replaced in near real-time.
What makes a delightful, magical experience?
* Transformation must occur, adding utility, meaning, or even useful action
* It must happen without delay
* The transformation must maintain fidelity and accuracy to the original
* The transformation shouldn’t interrupt the larger experience
* The less abstract, the more magical
* The less management/preparation the more satisfying
Take a look through the examples above. The ones that really delight are those that meet more than a few of these principles. When experiences show promise but don’t deliver, you can see which of these principles they fail to follow.