If you’ve been to the stunning new California Academy of Sciences in San Francisco, you may have noticed a number of interactive exhibits in the halls on the first floor. Among them are two game-like pieces by Snibbe Interactive that allow visitors to physically interact with a projected “natural” environment via motion sensors.
Bug Rug by Snibbe Interactive at the Cal Academy of Sciences, from a video of the installation.
One is called Bug Rug and is set on the floor of a Madagascar forest with insects running around under fallen leaves and branches. Visitors can scare the bugs by stomping around, or they can trap them to learn more about them by guiding bait into traps with a very specific gestural interaction. In the other, Arctic Ice, visitors use their shadows to block the sun’s rays, allowing ice to form so that a baby polar bear can find its way back to its mother.
After watching kids play with both pieces, and speaking with someone intimately involved in their installation who has watched visitors interact with them extensively, it’s pretty clear that visitors tend to be more engaged and successful with Arctic Ice than with Bug Rug. In pondering why this is the case (beyond the obvious fact that, for most people, baby polar bears are a lot more compelling than bugs), I’ve landed on the theory that blocking the sun’s rays with one’s shadow is a far more natural and discoverable physical interaction than placing one’s hands next to each other palm down, thumbs touching, to move things around on the ground.
With the increasing prevalence of physical and gestural interactivity, from the iPhone to Jeff Han’s election night Magic Wall spectacle on CNN, to the Wii, it’s likely we’re all going to be faced with the excitement and challenge of interacting with and designing devices and environments in new ways. One of the biggest challenges of physical interactivity is the lack of transparency into the “commands” or actions available with a given device or environment. The graphical user interface was, in many ways, a huge improvement over the command line precisely because it made it much more obvious what commands were available in a given context. Looking into the brave new future of physical interactivity, we’re confronted with the need to create idioms and vocabulary that are as discoverable and useful as possible, to avoid stepping back into command line-like arcana.

As with every new input method and interaction paradigm, some believe that gestural interfaces will be a panacea that automatically makes everything easy to use; there is a very real risk that this won’t be the case. Some physical interactions are pretty obvious — mapping the orientation of the Wiimote to the shaft of a tennis racket is a direct spatial relationship, and we can use our physical intelligence to intuit how it works pretty easily. The same goes for pinching to zoom out on the iPhone. However, when we start to use gestural commands for abstract notions, we lose the benefit of our kinesthetic intelligence, and things become a little (or quite a lot) less intuitive.
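The directness of a mapping like pinch-to-zoom shows up in how little logic it takes to describe: the gesture’s meaning is just the ratio of how far apart the fingers are now versus when the gesture began. A minimal sketch (the function names here are my own illustration, not from the book or any particular touch API):

```python
import math

def touch_distance(p1, p2):
    """Euclidean distance between two touch points given as (x, y) tuples."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def pinch_scale(start_touches, current_touches):
    """Zoom factor implied by a two-finger pinch gesture.

    Returns > 1.0 when the fingers spread apart (zoom in) and
    < 1.0 when they move together (zoom out) -- the on-screen
    result maps directly onto the physical motion.
    """
    return touch_distance(*current_touches) / touch_distance(*start_touches)

# Fingers start 100 px apart and spread to 200 px: content doubles in size.
scale = pinch_scale([(0, 0), (100, 0)], [(0, 0), (200, 0)])
```

The point isn’t the arithmetic; it’s that the user never has to learn this rule, because the gesture and its effect share the same spatial logic. An abstract gesture (say, a two-finger swirl meaning “undo”) has no such built-in mapping and must be taught.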
Dan Saffer’s new book Designing Gestural Interfaces is a great step towards defining a clear language of physical interactions. The book provides a solid overview of the important things to consider when designing for touchscreens and motion-sensitive controllers, as well as good design practices like prototyping and documentation. For me, the real meat of the book is the discussion of patterns like “spin to scroll” and “wave to activate,” as well as the catalog of gestures that could be used as the basis of a physical control idiom (like “shake head no”).
Both of these sections should provide good food for thought as you contemplate how to get beyond simple point-and-click interactions. Because gestural commands can be much less obvious to users than those written on buttons and menus in a GUI, it seems pretty likely that building off existing patterns is going to make your product or environment a lot easier to use, or at least easier to learn.
I also appreciate the section entitled “Communicating Interactive Gestures,” which describes how to provide affordances and express interaction idioms to users through written instructions, illustration, and demonstration. I would have liked to hear a little more about using animated and audio feedback to motivate physical action (beyond simple demonstration). While this is something you’d more typically find in games, there are great possibilities for using dynamic feedback to help users learn better control over a physical input mechanism.
The book is full of good examples of actual gestural interfaces, and it’s a great leaping-off point for learning even more than the generous serving that Dan has offered up. I’d absolutely recommend that anyone with even a passing interest in moving beyond the keyboard and mouse give it a thorough read.