Austin at SXSW – The Digital Master (1 of 3)

It used to be the case that we understood computation as a representation of the real world around us. It was used to model the effectiveness of bombs, cities, or patterns of life. But that has flipped. Now the physical world around us is an instantiation of a digital source. Our source used to be analog: in the case of photography, a negative. The source is no longer analog atoms, but rather a digital master. This is the first of a three-part series. Follow the rest of the conversation in part 2 and part 3.

Austin, March 11, 2:50pm: You’re staring at your phone, desperately trying to figure out the most appropriate, break-through, next-level place you could possibly go. But you’re also moving; your feet propel you forward, guided by the overflowing list of lives you could be living at 3:00pm today. Welcome to the crowd of SXSW ’13, a horde of nerds, some of whom you’ve highlighted as potential friendships, contacts, and maybe something more. Jumping to your other compass, the Twittersphere, you search for what’s good in the last 2 minutes. Expo G? You’ve got a good 10-minute walk. It starts to rain, and you see a swarm of folks donning red ponchos with a line emerging behind them. Just in time, you happily wear a URL in exchange for a dry walk to the next venue. Despite bumping into other tilted-head walkers, you find yourself in a massive conference room, ready to be inspired, snap an Instagram, and grab some quotable references for your Tumblr later on. Halfway through the talk, it hits you: ‘what’s next?’ You pull out your shiny glass master and realize 4:00pm promises 13 potential futures. The notion gives you pause. Imagine: what would SXSW be without the net? No digital schedule, no website swag, no live tweeting, no ambient cloud of intent. Just a room with a bunch of people talking. For better or for worse, our reality has flipped: what was once a world of physical things organized by people is now a world of digital things augmented by people. We look down for orientation, and up for verification. I’d like to share with you how SXSW taught me to stop worrying and learn to love the new master.

The digital master of the built environment

Making plastic junk is now a digital pursuit. One of the first unveilings at SXSW was a consumer-level 3D scanner. A couple of years ago the MakerBot was released with a promise to disrupt how real things are made. The cycle is now complete with the ability to scan an object into a digital mesh. The mesh can then be modified and printed out as a new plastic object. This is consumer level! For the price of a PC in ’93, you can purchase a 3D scanner and printer.

The demo object (scanned and printed) was a garden gnome, one of many crapjects waiting to happen.

Read More


Poor Alexi Devers: Bitten by a “dog,” then finding himself naked in a park on the morning after the next full moon, a pulpy mess of unidentifiable victim, dewy and glistening on the ground around him. News stories that day confirm that a terrible murder has taken place by a rabid “dog,” and Alexi looks up from the paper with the wide-eyed stare of the recently diagnosed. What will he tell Debbi, his girlfriend? How will he keep her safe? Fortunately for him, after a Google search and a few false leads, he discovers WereSafe, a service for people with “dog” problems just like him. It’s expensive, sure, but what choice has he got? One web form and credit card number later, he’s joined the service and a special package is on the way.

The WereSafe service has two main aspects: one to keep the monster contained, and the other to hide the problem from the innocent.

Read More

Strategies for early-stage design: Observations of a design guinea pig

Where do you start when you’re approaching a complex software design problem? If you work on a large development team, you know that software engineers and UX designers will often approach the same design problem from radically different perspectives. The term “software design” itself can mean very different things to software architects, system programmers, and user experience designers. Software engineers typically focus on the architectural patterns and programmatic algorithms required to get the system working, while UX designers often start from the goals and needs of the users.

In the spring of 2009, I participated in a research study that looked at the ways in which professional software designers approach complex design problems. The research study, sponsored by the National Science Foundation, was led by researchers from the Department of Informatics at the University of California, Irvine. The researchers traveled to multiple software companies, trying to better understand how professional software designers collaborate on complex problems. At each company, they asked to observe two software designers in a design session. At my company, AmberPoint, where I worked at the time as an interaction designer, I was paired with my colleague Ania Dilmaghani, the programming lead of the UI development team. In a conference room with a whiteboard, the researchers set up a video camera, and handed us a design prompt describing the requirements for a traffic control simulation system for undergraduate civil engineering students. We were allotted two hours to design both the user interaction and the code structure for the system.

Jim Dibble and Ania Dilmaghani at the whiteboard in their research design session

Read More

When is design done?

“We don’t finish the movies, we just release them.”
– John Lasseter of Pixar

It’s easy to think of design as an ongoing iterative process, a constant refining that never reaches an objective “end.” It is especially easy to think of software in this way. Because code isn’t static, the design of software is relatively dynamic, able in many situations to alter direction or incorporate new functionality without overturning initial design-framework decisions. While this can be true, it is also possible for design to reach a state in which it is done. Not simply done for the next release, but a point where design reaches finality. The design no longer carries the evolution of the product forward.


Once design reaches a stage in which the difference between versions is more window-dressing or a change in interaction approach than a realization of deeper functional improvements, design is done. When the ideas on how to improve a design no longer come, when the designers can no longer see a way to improve the idea, it is done. It isn’t that someone else couldn’t take the idea and evolve it, but that the stewards of the design reach a point where their collective imagination can’t move the product forward.

Design which is not done

It’s easy to find examples of design which isn’t done. Lots of first generation software is released delivering basic functionality. Later versions fill out with functionality, growing to meet the latent potential in the first version. This design isn’t done.

Early designs of Evernote promised much more than was delivered. Successive versions cast and recast the design until the initial flaws could be worked out. Early versions provided little more than a limited word processor that stored stuff in the cloud. The interaction paradigm was a little strange and frustrating. Evernote continues to be a design in process. Functionality continues to evolve and improve with each release; the design isn’t done.

Mature software may not be done either. Photoshop versions 5, 7 and 8 delivered significant design shifts. Paradigms for working with text, the inclusion of vector images, and an interface for handling RAW images marked major departures from previous versions. As an 11-year-old product, the design of Photoshop accomplished remarkable adaptation and revealed the “incompleteness” of prior designs. Of course the design leveraged advances in technology which were not available for earlier versions, but that’s the point. The design wasn’t done; design could still be used to improve the program, to advance what it did and how it did it.

Design of non-software products may also reveal a level of “not done.” A baby stroller from BumbleRide is “done” in the sense that you can purchase one and it works. The design is largely coherent and shows evidence of finish. But even here the design isn’t finished. A comparison of the 2008 and 2009 versions shows significant advancement of the design even though each of the versions was sold as a completed design. Wheels gained quick-release, the safety harness adopted a single button release, and the sun hood extended for more coverage. So is the design done now? I’d argue no. Improvements in ergonomics, materials, and signage all provide ripe areas for the design to continue to evolve.

When it reaches “perfection”

Design isn’t done when it reaches a pinnacle of efficiency or goodness. Done isn’t really a measure of quality or perfection. Many products never reach any level of quality or refinement. They represent evolutionary dead ends, stillborn ideas with no potential to grow. They are poorly conceived, even if executed well. Crappy products may arguably be done before they are ever built or coded. The lack of vision from the start dooms the product to an evolutionary dead end before it’s even born. If perfection is the measure of done, we don’t have any way to agree on what is perfect or good. Perfect doesn’t give us a way to evaluate done.

When it feels done

Subjective evaluations by the creator may be acceptable in the realm of art. Artists work until the piece is “done,” until they feel the idea has been expressed. The design of products, whether software or hardware, needs more objective measures than feelings. In part, designers need this because the act of creation relies on a whole team, not just an individual. We also need measures because products exist in a marketplace; there are deadlines, ship dates, feature sets, and marketing and sales efforts, which require more clarity around when the design will be done.

When the time or money runs out

For consultants, work is “done” when the contract (time) is up. Projects are scoped to meet specific deadlines and requirements which fit those timelines. Design deliverables are iterative; each pass we give moves a level deeper, and we work out more of the design details. We give great value for our time, but design is “done” when we run out of time. Our design is rarely done in the sense of every detail being worked out and every possible problem being solved. We work down from larger, more fundamental patterns and frameworks, iteratively filling in the details. The big picture may be done when we deliver, but often it is internal product owners or developers who will actually “finish” the design.

When the requirements are met

It could be argued that design is “done” when the initial requirements have been met. It’s done enough to release a version, but it’s not really done. After the product ships the design team refines the design, adding in features or fixing issues which shipped in the previous version. The designers work to fulfill the full potential of the product. As long as their work produces advancements the design isn’t done.

When innovation plateaus

Design is done when its evolution plateaus. A couple of versions are released with little more than rearranging the deck chairs. Rework or changes to the interface reflect passing fashions rather than fundamental shifts in direction or functionality. Innovations in the marketplace or in technological breakthroughs are not incorporated or addressed in the design. Evolution grinds to a halt, the product ceases to advance in meaningful ways.

Design continues on many products long after the design is done. Design effort is wasted in chasing a market rather than leading one. Products become bloated with features which compromise the clarity achieved when the design reached “done.” Features are designed which don’t evolve the product; they complicate the vision, reaching to be all things to all people and ultimately hobbling the product. The design of Microsoft Word has delivered little beyond version 5.1. It is a quite usable word processor, but the design for word processing was solid in 1991; in the subsequent releases little was advanced. Features were added that did little to improve the word processing experience. The design also failed to take advantage of shifts in the marketplace or technology. Five versions later, Word is still largely a pretty good word processor. While much has changed in the interface, switching interaction paradigms from menus to the ribbon can hardly be thought of as a fundamental shift in functionality. Word hasn’t evolved so much as changed its wardrobe.

Some products manage to react to changes in technology or the marketplace. The design accommodates changing needs and opportunities. The product evolves through design to include new functionality and utility, continuing to add life to the product. While Adobe Acrobat Pro has struggled with its share of bloat and incessant updates, the design of the program has continued to evolve. From humble beginnings as a reader/writer for portable documents, Acrobat has incorporated new functionality unimaginable when the product was initially designed: OCR of text, automatic creation of interactive forms, document markup, digital signing, and distributed workflows. The integration of this new functionality has stumbled at times, but Acrobat X delivers a coherent, usable evolution of a product that is more than 17 years old. What was just latent potential in the embryonic design of the first versions of the product has been realized.

Some products are so deeply bound to a specific paradigm that the only reasonable evolution is an entirely different approach. The original design is done. A new product, with a different design, is created to address new technology and a new marketplace. The original iPod’s design is done. The scroll-wheel/menu design of an MP3 player was groundbreaking and brilliant, and it was well-executed. At some point it became clear that this design was done; it couldn’t evolve while maintaining the same core design. The only road forward was to abandon this “done” design and adopt a new paradigm. The result was the iPod Touch. The shift was more than simply adding a bigger screen with touch input; what the product could do radically shifted.

Why does it matter?

It is important to acknowledge that design can reach a place of “done.” If we don’t, we may end up fooling ourselves that we are moving products forward when we are really just treading water. If we can’t step back and evaluate whether a design is done, we may continue to put effort into a product which we can’t improve. We will continue to release products that don’t help people achieve their goals, or worse, damage great products by bloating them with features no one needs. Knowing when the design is done allows us to recognize when our efforts will be productive and when our efforts will be wasted. When design is done it’s time to move on, to take up new challenges or products and start designing again. Read More

Creating immersive experiences with diegetic interfaces

I like to think of Interaction Design in its purest form as being about shaping the perception of an environment of any kind. Yes, today the discipline is so closely tied to visual displays and software that it almost seems to revolve around that medium alone, but that’s only because as of now, that’s pretty much the only part of our environment over which we have complete control.

The one field that has come closest to overcoming this limitation is the video game industry, whose 3D games are the most vivid and complete alternate realities technology has been able to achieve. Game designers have control over more aspects of an environment, albeit a virtual one, than anyone else.

Lately I’ve been thinking a lot about this idea that interfaces can be more closely integrated with the environment in which they operate. I’d like to share some of what I’ve learned from the universe of video games and how it might be applicable to other kinds of designed experiences.

In Designing for the Digital Age, Kim Goodwin criticizes the term “Experience design” as being too presumptuous because we don’t really have the power to determine exactly what kind of experience each person with their own beliefs and perceptions has. Even when we work across an entire event from start (e.g. booking a flight) to finish (arriving at the door), there are still countless factors outside our control that can significantly impact how a person will experience it.

Video game designers on the other hand can orchestrate a precise scenario since almost every detail in their virtual world is for them to determine. They can arrange exactly what kind of person sits next to you on a flight no matter who you are or how many times you take that flight.

That isn’t to say that videogames don’t have their limitations. Of course, it isn’t completely true that game designers can determine who sits next to you. They can only determine who your avatar sits next to. The most significant weakness of videogames is the inability to truly inhabit a designed environment or narrative. As much control as we may have over a virtual world, as long as we are confined to experiencing it through television screens and speakers, it won’t be anywhere near comparable to our real world.

Fortunately, there’s a growing effort to address this lack of immersion.

A key part of the problem lies in how we’re presented with, and interact with, complex information diegetically, that is, through interfaces that actually exist within the game world itself.

The 4 spaces in which information is presented in a virtual environment

Before continuing, it helps to be familiar with some basic concepts and terminology around diegesis in computer graphics: the different spaces of representation between the actual player and their avatar. The diagram above illustrates the four main types of information representation in games.
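The four spaces can be sketched along two axes. In the game-UI literature this taxonomy is commonly labeled diegetic, non-diegetic, spatial, and meta (the post itself only names the first two, so the labels and examples here are drawn from that broader literature, not from the diagram):

```python
def classify(in_narrative: bool, in_world: bool) -> str:
    """Classify an interface element by two questions:
    does it exist in the game's story (narrative), and
    does it exist in the game's 3D world (spatial geometry)?"""
    if in_narrative and in_world:
        return "diegetic"       # e.g. a working wristwatch the avatar wears
    if in_narrative and not in_world:
        return "meta"           # e.g. blood splatter on the "camera lens"
    if not in_narrative and in_world:
        return "spatial"        # e.g. a waypoint marker floating over an object
    return "non-diegetic"       # e.g. a classic HUD health bar overlay

print(classify(False, False))   # non-diegetic: the familiar HUD
```

The head-up display discussed below sits in the non-diegetic corner: it exists in neither the story nor the world, which is exactly why it can undercut immersion.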


Non-diegetic representations remain the most common type of interface in games. In first person shooters, arguably the most immersive type of game since we usually see the scenery through our avatar’s view, the head-up display has remained an expected element since Wolfenstein 3D first created the genre. Read More

Making sense of automotive information systems

As more information flows through automotive information systems, the UIs have become ever more complex and confusing. Drivers must sacrifice more and more valuable time and attention to find menus, enter information, and manage the integration of “after-market” devices, e.g. cell phones and MP3 players. Let’s take a fresh look at the layout of the console, and see if there are opportunities to clear up this confusion.

Today: Notice that the console (3) isn’t optimized for either the primary driver vision axis (1), or the passenger (2).

In today’s cars, critical information — status, emergency signaling, speed, fuel, temperature, and RPM gauges — is located in the driver’s primary vision axis, behind the steering wheel. This minimizes the impact on the driver’s attention while driving. Current steering wheel controls often provide physical buttons to control various on-the-fly tasks — signaling, gear changing, cruise control, volume, back/next, take/drop a call — to ensure that the driver keeps his hands on the wheel.

The BMW 7 series HUD

In higher-end cars like the BMW 7 series, head-up displays (HUDs) are becoming standard. HUDs integrate simplified driving instructions, speed limits, and emergency information into the primary vision axis, reducing the need to look down even a couple of degrees. In fact, there’s even an app for this! It’s called aSmart HUD.

In more and more cases, the center console offers a multitude of functionality, including the setup of various systems and navigation and entertainment controls. This console delivers a potpourri of content intended for both drivers and passengers, and it’s placed directly between driver and passenger, requiring both to lean toward the middle in order to use it. From the driver’s point of view, passenger operation of this console can feel like a friend grabbing the mouse from the driver’s hand and taking over. Not pleasant, and potentially the beginning of an argument.

Why not break up the center console platform and re-focus on the two different user types?

Tomorrow? Let’s optimize the content for each user.

The driver-oriented UI

Move the driver-related content into the driver’s primary vision axis behind the steering wheel, and move supplementary content into the passenger area. There will be some overlap, of course: radio and climate controls should be accessible by both. But wouldn’t it be nice to have two UIs tailored to the very different usage situations, rather than one general-purpose UI?

Obviously, complex functionality and setup routines should be disabled while the car is moving, but the basics would live within the sphere of the driver. This would begin to make the driving experience more targeted, more functional, and hopefully safer. A platform with an enlarged display such as Ford’s Fusion SmartGauge could supply this added functionality.

For enhanced controls while the car is stopped, the steering wheel could provide tactile “navigate & act” controls, such as multi-touch track pads or even a touchscreen. This would also avoid additional controllers such as Audi’s MMI, BMW’s iDrive or Lexus’s latest Remote Touch.

The passenger-oriented UI

As we’ve already seen in many current cars, passengers already have individual screens available, though these are mostly in the rear seats. Why not place all non-driving-specific controls explicitly in the hands of a passenger? This could be a purely touch-screen system, because the passenger isn’t driving and can therefore focus 100% on input and navigation of the system. You could even take it one step further and allow the passenger to modify the driver’s view with supplementary information: GPS directions, weather, and so on. This would support and enhance the driver/navigator dynamic, and get away from the current situation, which all too often leads to confusion and conflict.

What do you think? Read More

The Drawing Board: Feeding the Cats

Here at Cooper, we find that looking at the world from the perspective of people and their goals causes us to notice a lot of bad interactions in our daily lives. We can’t help but pick up a whiteboard marker to scribble out a better idea. We put together “The Drawing Board,” a series of narrated slideshows, to showcase some of this thinking.

The best-rated automatic cat feeder on Amazon has some serious interaction design problems, putting both well-fed cats and confident owners at risk. In this Drawing Board, Cooper designers turn their attention to the machines that take care of our four-footed friends.

Credits: Chris Noessel and Stefan Klocek. Read More

Into the groove: Lessons from the desktop music revolution

(Originally published in interactions magazine, I’ve expanded this a bit to include more examples.)

Musical instruments provide really intriguing examples of user interface design. While it can take years of training and no small amount of aptitude, an instrument in the right hands can provide highly nuanced control over the many aspects of sound that come together to form one of the highest forms of human expression. And even for those of us who will never achieve such heights of virtuosity, merely using such a “user interface” can result in a great sense of enjoyment, immersion, and fulfillment (what is often referred to as a state of “flow”).

Music is almost universally important to human culture, but instruments are not strictly “useful” and it seems strange to think of them as mere tools. That said, from the first bone flutes and stone tools, the evolution of musical instruments has closely paralleled that of more utilitarian technology. As inventor and futurist Ray Kurzweil puts it, “[musical expression] has always used the most advanced technologies available.”

Not surprisingly, then, as with so many other things, the dramatic increase in processor speeds has brought about a revolution in the way people use computers to make music. But while computational power has been a critical enabling factor in this revolution, at least equally important has been the ongoing evolution of the user interfaces of these new digital instruments.

The Novation Launchpad, a hardware controller specifically designed to work with Ableton Live running on a computer.

A recent history of musical technology and interactivity

As with the broader universe of technology, musical instruments have co-evolved with the practice of music. New technologies are often first introduced as a way of replicating and incrementally improving upon a previously established way of doing things, and then they may eventually point the way to something entirely new. In the same way the first cars were designed as “horseless carriages,” synthesizers were at first largely looked to as a means to emulate the sounds of acoustic instruments, and it took decades before electronic sounds became aesthetically appealing in their own right. Read More

Stratus Air: A Cooper concept project

When we saw the topic of this year’s I.D. Magazine Annual Design Review concept category, we thought it would be fun to put together an entry. As frequent travelers, we were particularly inspired by the brief: design a graphic, object, or environment that would improve the experience of air travel.

We thought our approach was a good mix of practicality and inspiration; a premium loyalty service enabled by helpful bits of technology that would ease the pain and smooth the turbulence of business travel. Did we expect to win? Absolutely. Even though the judges didn’t share our enthusiasm, we’re happy with what we came up with, and we wanted to share it with you.

We present Stratus Air.

(To view at full screen HD, click the little icon with 4 diagonal arrows next to the Vimeo logo.) Read More

After-market device solutions: What are they good for?

Why are after-market casings so popular with consumers, especially for portable devices? Are they just about protecting the product? Are existing product designs too boring? Have consumers lost confidence in the quality of product manufacturing? Or do they just want to customize their devices to be unique and special, as we have seen in Asia’s extensive customization culture?

Leather, custom decals and heavy-duty rubber covers.

The iPhone is beautifully designed, engineered, and manufactured. Apple has used high-quality materials to avoid scratches and heavier damage that come along with daily use. There are no painted parts, which would easily scratch to reveal the substrate. The early complaint about the physical construction was that its sleek finish made the phone too slippery. The absence of grip details on the surface, and the aluminum casing of the first generation, made the problem worse. Apart from this flaw, the physical form of the iPhone is well-designed, and I think it has great potential to display the aged patina that comes from long life and high-quality materials. Which makes me wonder: Why cover it up with a cheap plastic cover? Read More