Making sense of automotive information systems

As more information flows through automotive information systems, the UIs have become ever more complex and confusing. Drivers must sacrifice more and more valuable time and attention to find menus, enter information, and manage the integration of “after-market” devices, e.g. cell phones and MP3 players. Let’s take a fresh look at the layout of the console, and see if there are opportunities to clear up this confusion.

Today: Notice that the console (3) isn’t optimized for either the primary driver vision axis (1) or the passenger (2).

In today’s cars, critical information — status, emergency signaling, speed, fuel, temperature, and RPM gauges — is located in the driver’s primary vision axis, behind the steering wheel. This minimizes the impact on the driver’s attention while driving. Current steering wheel controls often provide physical buttons to control various on-the-fly tasks — signaling, gear changing, cruise control, volume, back/next, take/drop a call — to ensure that the driver keeps his hands on the wheel.


The BMW 7 series HUD

In higher-end cars like the BMW 7 series, head-up displays (HUDs) are becoming standard. HUDs integrate simplified driving instructions, speed limits, and emergency information into the primary vision axis, reducing the need to look down even a couple of degrees. In fact, there’s even an app for this! It’s called aSmart HUD.

Increasingly, the center console offers a multitude of functions, including system setup, navigation, and entertainment controls. This console delivers a potpourri of content intended for both drivers and passengers, and it’s placed directly between them, requiring both to lean toward the middle in order to use it. From the driver’s point of view, passenger operation of this console can feel like a friend grabbing the mouse out of your hand and taking over. Not pleasant, and potentially the beginning of an argument.

Why not break up the center console platform and re-focus on the two different user types?

Tomorrow? Let’s optimize the content for each user.

The driver-oriented UI

Move the driver-related content into the driver’s primary vision axis behind the steering wheel, and shift supplementary content into the passenger area. There will be some overlap, of course: Radio and climate controls should be accessible to both. But wouldn’t it be nice to have two UIs tailored to the very different usage situations, rather than one general-purpose UI?

Obviously, complex functionality and setup routines should be disabled while the car is moving, but the basics would live within the sphere of the driver. This would begin to make the driving experience more targeted, more functional, and hopefully safer. A platform with an enlarged display, such as the Ford Fusion’s SmartGauge, could supply this added functionality.
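To make this concrete, here is a minimal sketch of how such speed-based gating might work. The feature names and the speed threshold are hypothetical, invented for illustration rather than drawn from any particular automotive platform.

```typescript
// Hypothetical sketch: gate driver-side features on vehicle motion.
// Feature names and the speed threshold are illustrative only.

type DriverFeature = "radio-presets" | "climate" | "navigation-setup" | "phone-pairing";

// Features simple enough to remain available while driving.
const ALLOWED_WHILE_MOVING: ReadonlySet<DriverFeature> = new Set<DriverFeature>([
  "radio-presets",
  "climate",
]);

const MOVING_THRESHOLD_KMH = 5; // treat anything above walking pace as "moving"

function isFeatureEnabled(feature: DriverFeature, speedKmh: number): boolean {
  const moving = speedKmh > MOVING_THRESHOLD_KMH;
  // Everything is available when stopped; only the basics remain while moving.
  return !moving || ALLOWED_WHILE_MOVING.has(feature);
}

// Example: setup routines are locked out at highway speed, climate is not.
console.log(isFeatureEnabled("phone-pairing", 110)); // false
console.log(isFeatureEnabled("climate", 110));       // true
```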

For enhanced controls while the car is stopped, the steering wheel could provide tactile “navigate & act” controls, such as multi-touch trackpads or even a touchscreen. This would also avoid the need for additional controllers such as Audi’s MMI, BMW’s iDrive, or Lexus’s latest Remote Touch.

The passenger-oriented UI

As we’ve seen in many current cars, passengers already have individual screens available, though these are mostly in the rear seats. Why not place all non-driving-specific controls explicitly in the hands of a passenger? This could be a purely touch-screen system, because the passenger isn’t driving and can therefore focus 100% on input and navigation. You could even take it one step further and allow the passenger to modify the driver’s view with supplementary information such as GPS directions and weather. This would support and enhance the driver/navigator dynamic, and get away from the current situation, which all too often leads to confusion and conflict.
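As a rough sketch of that driver/navigator flow, the passenger UI could publish small, glanceable items to a channel that the driver’s display subscribes to. The message types and the channel below are purely illustrative assumptions, not an existing automotive API.

```typescript
// Hypothetical sketch of the passenger-to-driver flow described above.
// Message shapes and the channel are illustrative, not a real in-car API.

type SupplementaryInfo =
  | { kind: "next-turn"; instruction: string; distanceMeters: number }
  | { kind: "weather"; summary: string; temperatureC: number };

type Listener = (info: SupplementaryInfo) => void;

// The passenger UI publishes; the driver display subscribes.
class DriverDisplayChannel {
  private listeners: Listener[] = [];

  subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }

  publish(info: SupplementaryInfo): void {
    this.listeners.forEach((listener) => listener(info));
  }
}

// Usage: the passenger picks a route; the driver sees only the next turn.
const channel = new DriverDisplayChannel();
channel.subscribe((info) => {
  if (info.kind === "next-turn") {
    console.log(`Driver display: ${info.instruction} in ${info.distanceMeters} m`);
  }
});
channel.publish({ kind: "next-turn", instruction: "Turn left", distanceMeters: 300 });
```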

What do you think?

After-market device solutions: What are they good for?

Why are after-market casings so popular with consumers, especially for portable devices? Are they just about protecting the product? Are existing product designs too boring? Have consumers lost confidence in the quality of product manufacturing? Or do they just want to customize their devices to be unique and special, as we have seen in Asia’s extensive customization culture?

Leather, custom decals and heavy-duty rubber covers.

The iPhone is beautifully designed, engineered, and manufactured. Apple has used high-quality materials to avoid scratches and heavier damage that come along with daily use. There are no painted parts, which would easily scratch to reveal the substrate. The early complaint about the physical construction was that its sleek finish made the phone too slippery. The absence of grip details on the surface, and the aluminum casing of the first generation, made the problem worse. Apart from this flaw, the physical form of the iPhone is well-designed, and I think it has great potential to display the aged patina that comes from long life and high-quality materials. Which makes me wonder: Why cover it up with a cheap plastic cover?

Beyond the touch screen

Since Apple’s introduction of the iPhone, it seems like everyone is excited about the possibility of implementing a touch screen, and why not? There are a lot of benefits to touch-screen interfaces: extreme flexibility in visual and interaction design allows products and applications to be tailored to the specific needs of target markets and audiences; less reliance on hardware controls means significant savings in mechanical cost; larger screens allow more opportunities for richness in states and animations; and greater flexibility also makes it possible to reduce waste by creating longer-lasting devices with upgradable OSes and software.

But with the flexibility of touch-screen interfaces come drawbacks. Typing is slower and less accurate than on a physical keyboard, and many functions require more taps than those tied to hardware controls. (Compare the number of taps required to access a single email on a Treo with the same action on an iPhone.) There is tremendous opportunity to investigate how physical controls can be used in conjunction with touch screens, in terms of placement on the device, state functionality, and force-sensitivity behaviors, to achieve an optimal balance in the end-user experience.

To better understand these opportunities, I did a quick survey of some current and future products with this question in mind: How can hardware controls on portable devices integrate with touch screens to advance the current user experience?

Recent advances

A great deal of progress has been made to improve usability, extend functionality, and introduce more tactile feedback mechanisms into the touch-interface experience:

  • Gyroscopic sensors for display orientation and gaming
  • Proximity, light, and motion sensing
  • Texture and material simulations
  • 3D simulation
  • Multi-finger input technology
  • Audible and visual feedback for confirmation
  • Customizable functional key vibration
  • Physically moving displays that simulate a mechanical switch action

Reckoning with limitations

Information density remains a major challenge in the design of portable touch interfaces. The human hand and fingers just don’t come in smaller sizes, so controls and functions must remain relatively large. At the same time, one wonders whether older users can even see the small on-screen buttons and icons, or read font sizes smaller than 12 points. Is this a feasible platform for them, or do they need specially designed phones?
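As a back-of-the-envelope illustration of why controls must stay large, here is a small calculation of minimum control size in pixels, assuming a roughly 9 mm physical touch target (a commonly cited guideline, not a figure from this article) and a given display pixel density.

```typescript
// Back-of-the-envelope sketch: minimum on-screen control size in pixels,
// given a physical target-size guideline and the display's pixel density.
// The 9 mm figure is a commonly cited touch-target guideline, not a hard rule.

const MM_PER_INCH = 25.4;

function minTargetPixels(targetMm: number, pixelsPerInch: number): number {
  return Math.ceil((targetMm / MM_PER_INCH) * pixelsPerInch);
}

// Example: a 9 mm target on a 160 ppi screen vs. a 320 ppi screen.
console.log(minTargetPixels(9, 160)); // 57 px
console.log(minTargetPixels(9, 320)); // 114 px
```

The same physical target needs proportionally more pixels on a denser screen, which is why shrinking the UI does nothing to ease the information-density problem.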



Nokia 5800 XpressMusic

Physical navigation tools can help here. We know the stylus from earlier PDAs, where it was used for navigation, drawing, and text recognition. Not quite portable devices, but sketching pen displays offer a range of physical inputs such as trackpads, softkeys, and pen pressure and angle sensitivity.

Nokia has added a stylus-like device, the Plektrum, to its 5800 XpressMusic phone. (What’s next? Finger-puppet navigation?) The primary drawback of a stylus is that two hands are necessary to operate the device; in addition, many younger people perceive a stylus as uncool, according to research I’ve conducted in the past.
