Beyond the touch screen
Since Apple's introduction of the iPhone, it seems everyone is excited about the possibility of implementing a touch screen, and why not? Touch-screen interfaces offer a lot of benefits:
- Extreme flexibility in visual and interaction design allows products and applications to be tailored to specific needs, audiences, and markets
- Less reliance on hardware controls means significant savings in mechanical cost
- Larger screens allow more opportunities for richness in states and animations
- Greater flexibility also makes it possible to reduce waste by creating longer-lasting devices with upgradable operating systems and software
But the flexibility of touch-screen interfaces comes with drawbacks. Typing is slower and less accurate than on a physical keyboard, and many functions require more taps than they would with dedicated hardware controls. (Compare the number of taps required to open a single email on a Treo to the same action on an iPhone.) There is tremendous opportunity to investigate how physical controls can work in conjunction with touch screens, in terms of their placement on the device, state-dependent functionality, and force-sensitive behaviors, to achieve an optimal balance in the end-user experience.
To better understand these opportunities, I did a quick survey of some current and future products with this question in mind: How can hardware controls on portable devices integrate with touch screens to advance the current user experience?
A great deal of progress has been made to improve usability, extend functionality, and introduce more tactile feedback mechanisms into the touch-interface experience:
- Gyroscopic sensors for display format orientation and gaming
- Proximity, light, and motion sensing
- Texture and material simulations
- 3D simulation
- Multi-finger input technology
- Audible and visual feedback for confirmation
- Customizable function-key vibration
- Physically moving displays to simulate a mechanical switch action
Reckoning with limitations
Information density remains a major challenge in the design of portable touch interfaces. The human hand and fingers just don't come in smaller sizes, so controls and functions must remain relatively large. At the same time, one wonders whether older users can even see small on-screen buttons and icons, or read font sizes smaller than 12 point. Is this a feasible platform for them, or do they need specially designed phones?
Physical navigation tools can help here. We know the stylus from earlier PDAs, where it was used for navigation, drawing, and text recognition. Not quite portable devices, but sketching pen displays offer a range of physical inputs such as trackpads, soft keys, and pen pressure and angle sensitivity.
Nokia has added a stylus-like device, the Plektrum, to its 5800 XpressMusic phone. (What's next? Finger-puppet navigation?) The primary drawback of a stylus is that two hands are necessary to operate the device; in addition, many younger people perceive a stylus as uncool, according to research I've conducted in the past.
T-Mobile's G1 offers a physical trackball for navigation in combination with its touch screen and hardware controls for phone, home, and back. Apple's iconic Home button works for simple, uni-directional navigation, but what about more complex applications where multi-tasking may be necessary? Should the interaction design be solved with hardware controls, software controls, or a combination of the two?
Some manufacturers offer physical QWERTY keyboards in combination with the touch screen in order to give superior haptic feedback to users who write a lot of emails or SMS messages. The physical form and texture of the keys can be designed to be ergonomic and distinctive, with tactile reference details such as those on the F and J keys. Different QWERTY architectures are available: fixed below the screen, sliding out from the bottom, or sliding out from the side. Again, this device format may raise two-handed-use issues, and it also results in a rather thick cross-section for a handheld device.
In current devices, hardware controls are often used to adjust volume and to mute and lock the device. This makes these fundamental actions quickly accessible and offers instant physical feedback when the task is accomplished. A side-mounted scroll wheel, like the Trackwheel on some older BlackBerry models, lets you search for and select items from a long list, providing the kind of fine-control feedback you once got from the detents in a volume control. In BlackBerry's case this was the only navigation control, and users often experienced thumb fatigue. On the other hand, this control type and its on-device location avoid obstructing the display and prevent accidental selection, which always seems to happen to me when I'm scrolling on the iPhone.
Perhaps you remember operating your first Sony Walkman in your pocket, without any visual cues. Hardware controls offer definite functional benefits across user activities: you can find them by touch alone, without even looking (especially valuable in automotive or medical environments). The human brain has a significant capacity for remembering the location, function, and physical form of controls.
Wouldn't it be great if, when you got a call in a meeting, you could push a special "hold message" button that sends the caller a personalized voice message, instead of having to run out of the room and whisper to your boss on the phone? (Cooper case study: Taking the call) Or a smart Copy & Paste button with different functional stages, similar to the physical slider controls on digital cameras? What about a physical macro button whose functionality you can define yourself, similar to the idea behind the Optimus Maximus keyboard, where each key has a small OLED display?
Or, at some point, can we just use our thoughts, gestures, or voice to get all this rich content in a truly seamless way? Google's voice search feature and gyroscopic sensors may be the first answers.