Your Flat Design is Convenient for Exactly One of Us

Illustration built on the Creative Commons 2.0 Portrait of a Man by Flickr user and photographer Yuri Samoilov

I’m OK with fashion in interaction design. Honestly I am. It means that the field has grappled with and conquered most of the basics about how to survive, and now has the luxury of fretting over what scarf to wear this season. And I even think the flat design fashion of the day is kind of lovely to look at, a gorgeous thing for its designers’ portfolios.

But like corsets or foot binding, extreme fashions come at a cost that eventually loses out to practicality. Let me talk about this practicality for a moment.

In The Design of Everyday Things, Donald Norman distinguished between two ways that we know how to use a thing: information in the world, and information in the head.

Information in the world is stuff a user can look at to figure things out. A map posted near the subway exit is information in the world. Reference it when you need it, ignore it when you don’t.

Information in the head is the set of declarative and procedural rules that users memorize about how to use a thing. That you need to keep your subway pass to exit the subway is information in your head. Woe be to the rider who throws their ticket away thinking they no longer need it.

For flat design purists, skeuomorphism is something akin to heresy, but it’s valuable because it belongs to this former category of affordance: it is information in the world. For certain, the faux-leather and brushed-aluminum interfaces that Apple had been pumping out were just taking things way too far in that direction, to a pointless mimicry of the real world. But a button that looks like a thing you can press with your finger is useful information for the user. It’s an affordance based on countless experiences of living in a world that contains physical buttons.

Pure, flat design doesn’t just get rid of dead weight. It shifts a burden. What once was information in the world, information borne by the interface, is now information in users’ heads, information borne by them. That in-head information is faster to access, but it does require that our users become responsible for learning it, remembering it, and keeping it up to date. Is the scroll direction up or down this release? Does swipe work here? Well I guess you can damned well try it and see. As an industry now draped in flat design, we’ve tidied up our workspace by cluttering our users’ brains with memorized instruction booklets for using our visually sparse, lovely designs.

So though the runways of interaction design are just gorgeous right now, I suspect there will be a user-sized sigh of relief when things begin to slip a bit back the other way (without the faux leather, Apple). Something to think about as we gear up our design thinking for the new year.

What’s culture got to do with it?

In short, everything.

Does your work culture make it challenging for your team or organization to do great work? Well, this could be the year you make it better. At Fluxible 2013, our very own Teresa Brazen, Design Education Strategist, delivered a 30-minute talk, Make Culture, Not War: The Secret to Great Teams and Organizations, about the role and impact of culture on organizations. Of particular interest to the UX crowd, she explained how designers are uniquely positioned to influence culture by employing familiar tools from our bag of tricks. Enjoy (and share!) the video below!

If you’re interested in getting your hands dirty, you (and perhaps a few folks from your team/organization) might want to check out our newly launched 1-day Designing Culture Master Class. This training aims to help people intentionally approach their team or organizational culture – through a cultural assessment, visioning and goal-setting exercises, and development of a tactical plan to improve their culture (some of the topics Teresa hits on in her talk below). Teresa and Susan Dybbs, Managing Director of Interaction Design at Cooper, will be teaching the course in our San Francisco offices on Friday, January 31st.

We are also offering Designing Culture in-house training for organizations that would benefit from having a larger group (management, teams, etc) go through this process together. Contact us at cooperu@cooper.com for details.

Video: Make Culture, Not War: The Secret to Great Teams & Organizations

Summoning the Next Interface: Agentive Tools & SAUNa Technology

Cooper’s new Design the Future series of posts opens the door to how we think about and create the future of design, and how design can influence changing technologies. Join us in this new series as we explore the ideas behind agentive technology, and summon a metaphor to help guide us to the next interface.

Part 1: Toward a New UX

If we consider the evolution of technology—from thigh-bones-as-clubs to the coming singularity (when artificial intelligence leaves us biological things behind)—there are four supercategories of tools that influence the nature of what’s to come:

  1. Manual tools are things like rocks, plows, and hammers; well-formed masses of atoms that shape the forces we apply to them. They were the earliest tools.
  2. Powered tools are systems—like windmills and electrical machines—that set things in motion and let us manipulate the forces present in the system. Powered tools came after manual tools, and took a quantum leap with the age of electricity. They kept growing more complex until World War II, when military aircraft, the most advanced technology of the time, became so complex that even well-trained people couldn’t manage them, and the entire field of interaction design was invented in response, as “human factors engineering.”
  3. Assistive tools do some of the low-level information work for us—like spell check in word processing software and proximity alerts in cars—harnessing algorithms, ubiquitous sensor networks, smart defaults, and machine learning. These tools came about decades after the silicon revolution.
  4. The fourth category is the emerging one, the new thing that bears some consideration and preparation, and the one I have been thinking and presenting about around the world:
    Agentive tools, which do more and more of their work of their own accord, like learning about their users, and which are approaching the artificial intelligence that will, if you believe Vernor Vinge, eventually begin evolving beyond our ken. (A toy sketch of the distinction follows the quote below.)

"WIthin 30 years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

Read More

Make It Wearable

Recently I was interviewed by The Creators Project, the Vice/Intel collaboration, about sci-fi wearables and how Cooper approaches future design with its clients. And while my interview isn’t live yet, the Intel Make It Wearable Challenge it will be a part of was announced at CES yesterday. If you have an inventor’s mind, a love of wearable technology, and could use some of the US$1.3 million on offer to bring your idea to life, you’re going to want to see this.

The YotaPhone

This morning Dan Weissman interviewed me on NPR’s Marketplace about the viability of the 2-screen YotaPhone. (Americans will pronounce it like “Yoda” phone, and I suspect the semi-implied sci-fi connection will actually help.) The timeslot didn’t leave any room to expound, so here’s more on what I’m thinking.

The success of a new product in a mature market depends on many, many things. One of those is uniquely addressing an unmet need, and battery life is still one of those unmet needs. Until we solve some of those pesky constraints of physics and/or battery tech, we have to find ways to lengthen the utility of the phone within the limits of existing power reserves. YotaPhone adds a second, e-Ink display on the “back” of the phone, and this helps battery life in two ways.

Image via Wikimedia Commons, Creative Commons Attribution 3.0 Unported license.

But first, a paragraph of a primer: If you’re not familiar with the tech, e-Ink is an “electrophoretic display” in which tiny transparent spheres can be turned black or white with a zap of a particular electrical charge. (There’s a color version, but it’s more expensive and not as common.) The spheres are tiny enough to work as pixels, and that’s the basis of the display. It’s the technology driving the Amazon Kindle and the Barnes & Noble Nook, among other products.

First: Sipping from the battery cup

One of the great things about e-Ink is that it uses very little electricity, especially compared to the full-color, backlit screens on most smartphones. At a 20% battery warning, then, you could turn the thing around and, instead of having a handful of minutes left, conceivably have hours of phone time, as long as you stick to the low-energy e-Ink display. That’s pretty cool.

Second: Life after battery death

The other crazy nifty thing about e-Ink is that once the display is refreshed, it uses no power. What that means is that you can design the phone to display critical information as its dying act, and the phone is still useful—it doesn’t become a brick. About to lose battery? Have it display the most common/recent phone numbers you access, so you can make use of some other phone. Have it display the directions you’re currently following so you can get there. Have it display your electronic boarding pass for your flight. In each of these mini-scenarios, YotaPhone can extend the utility of the phone for its users past the battery life. (That said, note that I haven’t been shipped one to play with or test, and don’t know if this functionality is built into the phone. I’m just sussing out opportunities.)
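
To show what I mean, here’s a minimal sketch of how that dying-act logic might work. Everything here is hypothetical, including the render_to_eink and notify hooks; as noted above, I have no idea whether YotaPhone actually works this way.

    # A hypothetical sketch of the "dying act" behavior described above.
    # None of this is YotaPhone's actual API; it's one way the logic could work.

    EINK_SWITCH_LEVEL = 20  # suggest flipping to the low-power display
    DYING_ACT_LEVEL = 1     # last chance to write something useful to e-Ink

    def render_to_eink(content):
        # Stand-in for a real e-Ink display driver; once drawn, the image
        # persists with zero power.
        print(f"[e-Ink] {content}")

    def notify(message):
        print(f"[notification] {message}")

    def on_battery_change(level, context):
        if level <= DYING_ACT_LEVEL:
            # Whatever we render now survives after the phone dies.
            if context.get("active_directions"):
                render_to_eink(context["active_directions"])
            elif context.get("boarding_pass"):
                render_to_eink(context["boarding_pass"])
            else:
                contacts = "\n".join(context.get("recent_contacts", []))
                render_to_eink("Recent contacts:\n" + contacts)
        elif level <= EINK_SWITCH_LEVEL:
            notify("Battery low: flip to the e-Ink side to stretch your charge.")

    # Example: battery hits 1% while the user is mid-route.
    on_battery_change(1, {
        "active_directions": "Turn left on 3rd St; 0.4 mi to go",
        "recent_contacts": ["Alex", "Sam"],
    })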

The YotaPhone is not the first to employ e-Ink. The Motorola Motofone (note the rhyming name) was released in 2006, and it featured an e-Ink display. But the e-Ink was its only display. Motofone asked its users to downgrade their whole experience in exchange for battery life, which is a concern for only a small fraction of a phone’s use. Contrast that with the YotaPhone, which says that you can have the premium sensory experience of full color and brightness as long as the battery reserves are flush. AND it gives users an option to downgrade their experience when that becomes necessary, and that’s new.

Also note that there are other design challenges to having two screens at once, but these are for a blog post longer than this one. (Somebody hire us to design for this little guy, and you can get a really, really good answer to that question. :)

Here at Cooper we design around users’ goals, and mobile phone users’ goals are actually to have mobile access anytime and anywhere, implying infinite power. And if someday battery capacity and/or decay are simply “solved,” the YotaPhone will seem very much like an antiquated, stopgap solution. But until then, it seems like a very good stopgap to me, one that I’d personally find useful, and I suspect the market will, too.

Cooper U On The Road

Are your products failing to resonate with users? Too many features creating bloat? Many of today’s products are driven by spreadsheets, technology constraints, and feature lists. They leave frustrated customers wanting more.

We believe a better approach to design focuses on human needs first and technology second.

In Cooper’s Interaction Design training, we can help you envision, plan, and build products and services that are financially viable, technically feasible, and that your customers will love.

Beginning this December, Cooper is bringing our experience-based, hands-on training to sites around the world.

Where will we be going?

December 3-6 in Philadelphia, Pennsylvania

May 2014 in Berlin, Germany (If you want to be the first to know when we announce the dates, add your name here)

Read More

Designing the Future: Cooper in Berlin

Most software projects are built around the question “What are we going to do next?” But occasionally we’re asked to think farther out. Projects focused on the 5-10 year range are more about “Where are we headed?” and “What’s going to inspire people?” These are different questions to ask, and answering them changes the usual process of interaction design.

I’ve been thinking about these questions for a while, and at the MobX conference in Berlin I conducted a workshop in which a group of 16 designers and strategists took a look at how you answer them.

So…how do you do it? The core of the matter is to understand what’s going to be different in the future you’re designing for.

These kinds of projects are less about “What’s next?” and more about “Where are we headed?” and “What’s going to inspire people?”

Read More

Augmented Experience


Photo via Reuters / Carlo Allegri

Let’s be honest: Google Glass looks pretty silly. Its appearance is out of time, futuristic, and obnoxiously so. And it’s out of place in daily life—a strange accessory with mysterious purpose, as if someone were to walk around all day with a skateboard on a leash.

But Glass also points to an intriguing future, one in which the line between using a digital device and simply going about daily life is removed. Whereas traditional spectacles serve a corrective purpose, helping us see reality more clearly, Glass offers a new category of lenses that promise to augment the reality we see. It opens a vast new frontier for the practice of interaction design that, like the Wild West, is full of lawlessness and danger and promise. And it is the UX community that will shape this landscape; we will determine its character, and the impact it will have on people’s lives.

A key question all this raises is: what “reality” is Glass augmenting? At the moment, Glass being a Google product, the augmentation is designed primarily to target the urban economic and social spheres. Look down the street through Glass, and you may see restaurant storefronts adorned with floating metadata describing the cuisine type and star ratings from previous diners. Turn your head, and an indicator points toward your next calendar appointment. Peer at a product on the shelf, and prices for similar products are displayed for easy comparison. You’ll always know where you are, where you need to be, and what you’re looking at. The reality that Glass augments is a realm of people, objects, places of business, and locations. In other words, what can be expressed in a database and efficiently searched.
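
To make that “database” framing concrete, here’s a toy sketch of the query such an overlay implies: given the wearer’s position and heading, filter points of interest to those inside the field of view and surface their metadata. The data, names, and numbers are all made up for illustration; this is not how Glass actually works under the hood.

    # A toy sketch of AR annotation as "database + search": find the points
    # of interest the wearer is looking at and attach their metadata.
    import math

    POI_DB = [  # a stand-in for a real, efficiently searchable POI database
        {"name": "Luigi's", "kind": "Italian", "rating": 4.5,
         "lat": 37.7750, "lon": -122.4180},
        {"name": "Taqueria Sol", "kind": "Mexican", "rating": 4.2,
         "lat": 37.7752, "lon": -122.4195},
    ]

    def bearing_to(lat1, lon1, lat2, lon2):
        """Initial compass bearing in degrees from point 1 to point 2."""
        d_lon = math.radians(lon2 - lon1)
        lat1, lat2 = math.radians(lat1), math.radians(lat2)
        x = math.sin(d_lon) * math.cos(lat2)
        y = (math.cos(lat1) * math.sin(lat2)
             - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon))
        return math.degrees(math.atan2(x, y)) % 360

    def overlay_labels(lat, lon, heading, fov=60):
        """Label every POI inside the wearer's field of view."""
        labels = []
        for poi in POI_DB:
            bearing = bearing_to(lat, lon, poi["lat"], poi["lon"])
            # Angular offset from the center of gaze, wrapped to [-180, 180).
            offset = (bearing - heading + 180) % 360 - 180
            if abs(offset) <= fov / 2:
                labels.append(f"{poi['name']} ({poi['kind']}, {poi['rating']} stars)")
        return labels

    # Wearer stands downtown, facing roughly east.
    print(overlay_labels(37.7749, -122.4194, heading=75))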

Toward a better future

At this point in the conversation, the story usually veers into the realm of exasperation and despair. Google Glass represents the death of spontaneity! It will systematize and computerize our lives! Organic experience will be lost! (And, most insidious of all) Google will monitor and monetize every saccade of our eyeballs, every step we take!


“Big Brother” from the film adaptation of Orwell’s 1984

Given the penchant for technologists to base business models on advertising and “big data” about their customers, it is not surprising that Google Glass can be seen as a kind of portable panopticon. But I think the truth that this device foreshadows is something potentially more benign, and almost certainly beneficial.

The dystopian narrative that depicts a society dominated by machines and ubiquitous surveillance is common, expressed through fiction, film, and even journalism, which tends to draw on the same sinister rhetoric. George Orwell’s 1984 describes the homogenization and suppression of culture through rules, systems, and constant surveillance. In a more recent popular expression, Pixar’s WALL-E imagines a future humanity composed of zombie-like innocents, shuttled along by automated chairs, staring feebly into digital screens, mobilized—and controlled—by machines. The plausibility of these futures is made even more vivid by the unfolding story of the depth of NSA surveillance.

To paraphrase a recent piece by Don Norman, it all depends on how we design and develop augmented reality applications. If we manage to create genuinely useful applications with wearable technologies like Google Glass, people will benefit. This seems at first more like a truism than truth. But the obviousness of the statement belies the underlying premise, which is that Google Glass and its future iterations are simply a canvas on which we can write the future of our “augmented” everyday experience. So let’s not leave it all up to Google, shall we?

Big ideas

Ideas for the positive future of augmented reality abound. Augmedix, for example, is a small company with a vision of Google Glass re-shaping the doctor-patient relationship. Increasingly, the burden of the new and fraught world of digital medical records is damaging this interaction. Doctors stare at screens instead of faces, they spend as much time clicking checkboxes and radio buttons as they do examining the bodies and listening to the voices of the people under their care. Augmented reality could turn this scenario on its head by allowing doctors to look at and converse with their patient while simultaneously accessing and transmitting important information through Glass. This will almost certainly lead to fewer errors, an increase in trust, and ultimately better health outcomes.


A doctor wears Glass with the Augmedix app.

Or consider William Gibson’s Spook Country, a novel in which a central character creates “locative art,” what you might call augmented reality sculpture. Imagine looking at a city fountain with your augmentation goggles and seeing a bloom of light and color where others see only water. That we could transform our physical landscape in a way that enhances its beauty—rather than simply enhancing its economic potential—is a stunning notion. Unlike 3D movie glasses or straight-up “virtual reality,” the idea of a physical/virtual mashup offers us a chance to experiment and play in realms previously only available to the world of screens and displays, without losing the sense of being present in a place, something virtual reality cannot preserve. We remain in the real world.

The design of augmented reality

The first attempts to harness the power of Glass-like technology will be “ports,” shoe-horning old functionality into a new form factor. Text and email messages will appear, caller ID will notify you of a phone call, the forward-facing camera will take a picture or video on command. But none of these use cases address new goals. They simply make achieving old goals incrementally faster or more convenient. I don’t have to lift my phone and look at the screen to see a text message or know who’s calling. I don’t have to lift my camera and press a button to take a picture. The difference in my experience enabled by porting functionality from my phone to Glass is a difference of degree, not a difference in kind.

More interesting will be the forays into using augmented reality tech to meet previously unmet goals. Augmedix is a good example, because it bucks the trend toward less personal medicine and serves both a doctor’s goal and a patient’s goal. Locative art is similarly interesting, because it provides an entirely new artistic medium and way of experiencing that art. Mapping and orientation in a visually augmented world represents another fundamental change, because it bridges the gap between the abstract 2D map and the immediately actionable—a translation that currently happens in the human brain.

Go get ‘em

Augmented reality is in its infancy. Google Glass still faces some serious challenges, especially on the hardware front: the device must be miniaturized and made less obtrusive so that wearing it feels less like pulling a skateboard on a leash everywhere you go. But the frontier for experience design this device opens up is huge, and doesn’t have to remain within the boundaries Google sets. Part of our challenge and calling as a UX community is to think deeply about what an augmented experience feels like, and how it shapes people’s lives. As you would with any user experience, let unmet user goals guide your design.

Your role in this revolution is just beginning.

Cooper Parlor: The Future of Television Meets the Future of Design

During our October Parlor, a packed room enjoyed presentations by Richard Bullwinkle, Head of US Television Innovation at Samsung, and Jeremy Toeman, CEO of the startup Dijit Media. In this edited, hour-long video, you will be guided through trends in media consumption, technological advances, and the evolution of show content and format, towards predictions of what is coming next in the realm of television and design.

“TV in the future will be any screen, any location, holographic, 3D.” — Jeremy Toeman

From Richard Bullwinkle, you’ll find out what the highest rated TV episode in history is, and hear about “a seminal moment in television for nerds.” Jeremy Toeman shares what the viewing habits of children can tell us about our future, and ponders the pros and cons of “binge viewing,” now that downloaded series are available.

In highlights from the brainstorming workshop that followed the two presentations, you’ll see brief excerpts as the teams tackle design problems in the TV domain, such as accommodating family viewing with different needs and customizing cable services to individual desires and habits.

For more on this Parlor event, visit our Storify page here.

Find Out:

  • The #1 device for watching Netflix (not what you’d expect)
  • Why over 90 percent of all TV viewers use a second screen while watching TV
  • The lifecycle of a TV
  • What we’ll be viewing shows on in 3 years

What is the Cooper Parlor?
The Cooper Parlor is a gathering of designers and design-minded people to exchange ideas around a specific topic. We aim to cultivate conversation that instigates, surprises, entertains, and most importantly, broadens our community’s collective knowledge and perspective about the potential for design.

Join us for the next Cooper Parlor – Thursday, November 14 for a workshop on how to design your professional relationships. More details and registration here.
