Planets Don’t Have Orbits

I heard an argument put forward by Andrew Hinton way back in Dublin at the Interaction12 conference. The short form goes like this: “Users don’t have goals.” (UDHG for short.) Being a big believer in Goal-Directed Design, I thought the argument self-evidently flawed, but since it came up again as a question from a student at my Cooper U class in Berlin, I feel I ought to address it.

Are there, in fact, goals?

Given just those four words, it seems like the claim is that users actually have no goals. But of course goals exist. If they didn’t, why would anyone get out of bed in the morning? Or do work? Or make conference presentations? If we didn’t have goals, nothing would be happening in the world around us. But of course we do: we get out of bed, we do work, we write blog posts, all because we have reasons which—for clarity—we call goals. So what UDHG really means is that most people don’t have explicit goals.

Read More

Barry the Blog Post…

…or, Why Silly Names Make Silly Personas, and 8 Tips to Getting Your Personas Named More Effectively

You’ve seen them before and unfortunately, you’ll see them again. Personas with names like Sarah the Security-Minded, Adam the Artist, Gloomy Gus, or Uzziah the Uppity Unix User. (Wait. You don’t have a persona named Uzziah?)

“What’s in a name? That which we call a rose / By any other name would smell as sweet.”
—Romeo & Juliet, Act II scene 2

A quick word about doing this sort of thing. Don’t. On one level, sure, it works. The alliteration helps you remember both the name and the salient characteristic that that persona is meant to embody. Who was Gus? Oh that’s right. The gloomy one.

Read More

OS Naught

For immediate release:
In a bold move, Apple has announced the business strategy for “OS Naught,” the next version of its popular operating system for Mac, iPhone, and iPod. In a press release delivered to industry insiders by conference call last evening, Apple CEO Timothy Cook explained that the OS, to be not released in Q3 2014, will require users to pay Apple as if a major update to the OS had been provided, but will actually contain no changes at all.

OS Naught logo

Read More

SXSWi recap

Gettin’ Bizzy with Pair Design

I went to SXSW Interactive to give a next-version talk about Pair Design with fellow Cooperista Suzy Thompson. It was much improved from the first draft, delivered in Amsterdam earlier this year at Interaction14, and the audience was a smart group of deeply engaged designers. (Shout-outs to everyone who attended.)

Image by Senan Ryan

Read More

What you’ll like for dinner

Or: How persuasive design saved my lunch

While I was en route to Amsterdam for IXDA14, something struck me about the way the dinner options were presented to passengers. Here’s what was happening: the flight attendant delivered the menu the same way to each row:

“Would you like barbeque chicken, beef strip, or vegetarian?”

I’ve been a vegetarian for twenty years now, and I’m a little sensitive to these moments. At first, my identity hackles were raised. “Hey!” I thought, “Why wouldn’t it be ‘Chicken, beef, and spicy red-beans-and-rice?’ We eat food, not a category of food! Those options should be presented as equals because we’re equals…Blah blah blah…ramble ramble…”

Fortunately, as is my habit, I caught myself mid rant, and tried to consider what was good about it. And sure enough, on reflection it’s the exact right way to present these options. Cooper’s been paying more attention to persuasive design of late, so let me explain, because that’s exactly what’s going on. The flight attendants are using choice architecture to keep vegetarians fed.

You see, one of the problems that vegetarians encounter when eating buffet-style with omnivores is that when there is a veggie option present, if it’s too good, there’s a risk that the omnivores will eat all the veggie stuff before we get to the front of the line, leaving us poor suckers with empty plates and sad-trombone bellies.

If the attendant presented “chicken, beef, and spicy red-beans-and-rice,” that’s exactly what’s at risk. An omnivore hearing that might think, “Hey, I’m a huge fan of spicy red beans and rice! Cajun spice is awesome. Bam! Let’s kick it up a notch!”

But when hearing a menu consisting of two easy-to-visualize options and the category of “vegetarian,” omnivores are more likely to be turned off by that third option. “Vegetarian? Screw that. I’m not a vegetarian. I like my meat heaping and with a side of meat. Meat me up, attendant, with the finest, meatiest meatings you have!” They’re less likely to ask after the actual contents of the vegetarian option, as they’re busy thinking about whether they’d like chicken or beef.

Meanwhile the vegetarians (even if their delicate identities are a bit bruised) are relieved when they hear that their needs have been considered. The unlucky ones in the very back of the plane (who failed to arrange a special meal in advance) might even get to eat.

              descriptive option    categorical option
omnivores     Might choose :)       Less likely to choose, still :)
vegetarians   Less to eat :(        More to eat :)

It’s not foolproof, of course, but I’ll bet that if we could do a plane-by-plane comparison of “vegetarian” vs. “red beans and rice,” the categorical option would leave far more people happy overall. And that’s one of the powers of well-done choice architecture.
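To make that bet concrete, here’s a toy simulation of the two framings. The probabilities are pure assumptions on my part (no one has plane-by-plane data here); the point is only to show the mechanism: when omnivores are served first and the framing nudges even a fraction of them toward the veggie dish, the vegetarians at the back go hungry.

```python
import random

# Hypothetical choice probabilities -- illustrative assumptions, not data
# from any study. Under the descriptive framing ("red beans and rice"),
# omnivores sometimes pick the veggie dish; under the categorical framing
# ("vegetarian"), they rarely do.
P_OMNIVORE_PICKS_VEGGIE = {"descriptive": 0.30, "categorical": 0.05}

def simulate_cabin(framing, n_omnivores=90, n_vegetarians=10,
                   n_veggie_meals=20, seed=0):
    """Serve the cabin front to back, vegetarians seated at the back.
    Return how many vegetarians end up with a vegetarian meal."""
    rng = random.Random(seed)
    meals_left = n_veggie_meals
    for _ in range(n_omnivores):  # omnivores are served first
        if meals_left and rng.random() < P_OMNIVORE_PICKS_VEGGIE[framing]:
            meals_left -= 1
    return min(n_vegetarians, meals_left)

for framing in ("descriptive", "categorical"):
    print(framing, simulate_cabin(framing))
```

Under these made-up numbers, the descriptive framing exhausts the veggie meals before the back rows are served, while the categorical framing leaves enough for every vegetarian, which is the whole trick of the choice architecture.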

Your Flat Design is Convenient for Exactly One of Us

Illustration built on creative commons 2.0 Portrait of a Man by Flickr user and photographer Yuri Samoilov

I’m OK with fashion in interaction design. Honestly I am. It means that the field has grappled with and conquered most of the basics about how to survive, and now has the luxury of fretting over what scarf to wear this season. And I even think the flat design fashion of the day is kind of lovely to look at, a gorgeous thing for its designers’ portfolios.

But like corsets or foot binding, extreme fashions come at a cost that eventually loses out to practicality. Let me talk about this practicality for a moment.

In The Design of Everyday Things, Donald Norman distinguished between two ways that we know how to use a thing: information in the world, and information in your head.

Information in the world is stuff a user can look at to figure out. A map posted near the subway exit is information in the world. Reference it when you need it, ignore it when you don’t.

Information in the head is the set of declarative and procedural rules that users memorize about how to use a thing. That you need to keep your subway pass to exit the subway is information in your head. Woe be to the rider who throws their ticket away thinking they no longer need it.

For flat design purists, skeuomorphism is something akin to heresy, but it’s valuable because it belongs to this former category of affordance: it is information in the world. For certain, the faux-leather and brushed-aluminum interfaces that Apple had been pumping out were just taking things way too far in that direction, to a pointless mimicry of the real world. But a button that looks like a thing you can press with your finger is useful information for the user. It’s an affordance based on countless experiences of living in a world that contains physical buttons.

Pure, flat design doesn’t just get rid of dead weight. It shifts a burden. What once was information in the world, information borne by the interface, is now information in users’ heads, information borne by them. That in-head information is faster to access, but it requires that our users become responsible for learning it, remembering it, and keeping it up to date. Is the scroll direction up or down this release? Does swipe work here? Well, I guess you can damned well try it and see. As an industry now draped in flat design, we’ve tidied up our workspace by cluttering our users’ brains with memorized instruction booklets for using our visually sparse, lovely designs.

So though the runways of interaction design are just gorgeous right now, I suspect there will be a user-sized sigh of relief when things begin to slip a bit back the other way (without the faux leather, Apple). Something to think about as we gear up our design thinking for the new year.

Summoning the Next Interface: Agentive Tools & SAUNa Technology

Cooper’s new Design the Future series of posts opens the door to how we think about and create the future of design, and how design can influence changing technologies. Join us in this new series as we explore the ideas behind agentive technology, and summon a metaphor to help guide us to the next interface.

Part 1: Toward a New UX

If we consider the evolution of technology—from thigh-bones-as-clubs to the coming singularity (when artificial intelligence leaves us biological things behind)—there are four supercategories of tools that influence the nature of what’s to come:

  1. Manual tools are things like rocks, plows, and hammers; well-formed masses of atoms that shape the forces we apply to them. Manual tools were the earliest tools.
  2. Powered tools are systems—like windmills and electrical machines—that set things in motion and let us manipulate the forces present in the system. Powered tools came after manual tools, and took a quantum leap with the age of electricity. They grew more and more complex until World War II, when the most advanced technology of the time, military aircraft, had become so complex that even well-trained people couldn’t manage them, and the entire field of interaction design was invented in response, as “human factors engineering.”
  3. Assistive tools do some of the low-level information work for us—like spell check in word processing software and proximity alerts in cars—harnessing algorithms, ubiquitous sensor networks, smart defaults, and machine learning. These tools came about decades after the silicon revolution.
  4. Agentive tools are the emerging category, the new thing that bears some consideration and preparation, and the one I have been thinking and presenting about across the world. These tools do more and more on their own accord, like learning about their users, and they are approaching the artificial intelligence that will, if you believe Vernor Vinge, eventually begin evolving beyond our ken.

"Within 30 years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."
—Vernor Vinge
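The line between the third and fourth categories is easiest to see side by side. Here’s a toy contrast of my own (not from the taxonomy above, just an illustration): the same spell-check task done assistively, where the tool flags and the user acts, and agentively, where the tool acts on its own using what it has learned about its user.

```python
# A toy contrast of two levels in the taxonomy (my own example): the same
# spell-check task done assistively and agentively.

DICTIONARY = {"the", "quick", "brown", "fox"}

def assistive_spellcheck(words):
    """Assistive: does the low-level information work (flagging possible
    misspellings), but leaves the decision and the action to the user."""
    return [w for w in words if w.lower() not in DICTIONARY]

def agentive_spellcheck(words, learned_corrections):
    """Agentive: acts on its own accord, applying corrections it has
    learned about this particular user, and could simply report what it did."""
    return [learned_corrections.get(w.lower(), w) for w in words]

text = ["the", "qiuck", "brown", "fox"]
print(assistive_spellcheck(text))                     # ['qiuck']
print(agentive_spellcheck(text, {"qiuck": "quick"}))  # ['the', 'quick', 'brown', 'fox']
```

The assistive version hands back a to-do list; the agentive version hands back a finished result. That shift, from tool-as-helper to tool-as-actor, is the design territory the rest of this series explores.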

Read More

Make It Wearable

Recently I was interviewed for The Creators Project, the Vice/Intel collaboration, about sci-fi wearables and how Cooper approaches future design with its clients. And while my interview isn’t live yet, the Intel Make It Wearable Challenge it will be a part of was announced at CES yesterday. If you have an inventor’s mind, a love of wearable technology, and could use some of the US$1.3 million on offer to bring your idea to life, you’re going to want to see this.

The YotaPhone

This morning Dan Weissman interviewed me on NPR’s Marketplace about the viability of the two-screen YotaPhone. (Americans will pronounce it like “Yoda” phone, and I suspect the semi-implied sci-fi connection will actually help.) The NPR timeslot didn’t leave any room to expound on the punditry, so here’s more on what I’m thinking.

The success of a new product in a mature market depends on many, many things. One of those is uniquely addressing an unmet need. Battery life is still one of those unmet needs. Until we solve some of those pesky constraints of physics and/or battery tech, we have to find ways to lengthen the utility of the phone within the constraints of existing power reserves. The YotaPhone adds a second, e-Ink display on the “back” of the phone, and this helps battery life in two ways.


Image: Wikimedia Commons, Creative Commons Attribution 3.0 Unported license.

But first, a quick primer: if you’re not familiar with the tech, e-Ink is an “electrophoretic display” in which tiny capsules hold charged black and white pigment particles; a zap of electricity of the right polarity pulls one color or the other to the surface. (There’s a color version, but it’s more expensive and not as common.) The capsules are tiny enough to work as pixels, and that’s the basis of the display. It’s the tech driving the Amazon Kindle and the Barnes & Noble Nook, among other products.

First: Sipping from the battery cup

One of the great things about e-Ink is that it uses very little electricity, especially compared to the full-color, backlit screens that are on most smartphones. At a 20% battery warning, then, you could turn the thing around and instead of having a handful of minutes left, you could conceivably have hours of phone time left, as long as you stick to the low-energy e-Ink display. That’s pretty cool.

Second: Life after battery death

The other crazy nifty thing about e-Ink is that once the display is refreshed, it uses no power. That means you can design the phone to display critical information as its dying act, and the phone remains useful—it doesn’t become a brick. About to lose battery? Have it display the phone numbers you call most, so you can dial them from some other phone. Have it display the directions you’re currently following so you can get where you’re going. Have it display your electronic boarding pass for your flight. In each of these mini-scenarios, the YotaPhone can extend its utility for users past the battery life. (That said, note that I haven’t been shipped one to play with or test, and don’t know if this functionality is built into the phone. I’m just sussing out opportunities.)
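The dying-act idea is simple enough to sketch. Everything below is hypothetical: these aren’t YotaPhone APIs (I have no idea what its firmware looks like), just placeholder names showing the logic: when the battery crosses a critical threshold, pick the most useful static payload and push it to the zero-power e-Ink panel, where it will persist after shutdown.

```python
class EInkDisplay:
    """Stand-in for an e-Ink panel: whatever was rendered last persists
    even with no power behind it."""
    def __init__(self):
        self.contents = ""

    def render(self, payload):
        self.contents = payload  # survives power loss, per the e-Ink property

CRITICAL_BATTERY = 0.02  # 2% -- an assumed threshold

def on_battery_change(level, phone_state, eink_display):
    """Hypothetical handler: at critical battery, push the most useful
    static payload to the e-Ink screen as the phone's dying act."""
    if level > CRITICAL_BATTERY:
        return
    # Priority order is a guess: a boarding pass is time-critical,
    # directions are task-critical, contacts are the general fallback.
    if phone_state.get("boarding_pass"):
        payload = phone_state["boarding_pass"]        # e.g. a QR code
    elif phone_state.get("active_directions"):
        payload = phone_state["active_directions"]    # remaining turns
    else:
        contacts = phone_state.get("recent_contacts", [])
        payload = "\n".join(contacts[:5])             # numbers to borrow-dial
    eink_display.render(payload)
```

The interesting design question this sketch surfaces is the priority order: which payload matters most is context-dependent, and getting that heuristic right is exactly the kind of goal-directed problem the paragraph above is gesturing at.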

The YotaPhone is not the first to employ e-Ink. The Motorola Motofone (note the rhyming name) was released in 2006, and it featured an e-Ink display. But the e-Ink was its only display. The Motofone asked its users to downgrade their whole experience in exchange for battery life, which isn’t a concern for most of a phone’s use. Contrast that with the YotaPhone, which says you can have the premium sensory experience of full color and brightness as long as the battery reserves are flush, AND it gives users the option to downgrade their experience when that becomes necessary. That’s new.

Also note that there are other design challenges to having two screens at once, but these are for a blog post longer than this one. (Somebody hire us to design for this little guy, and you can get a really, really good answer to that question. :)

Here at Cooper we design around users’ goals, and mobile phone users’ actual goal is to have mobile access anytime and anywhere, implying infinite power. If someday battery capacity and/or decay are simply “solved,” the YotaPhone will seem very much like an antiquated, stopgap solution. But until then, it seems like a very good stopgap to me, one that I’d personally find useful, and I suspect the market will, too.

Designing the Future: Cooper in Berlin

Most software projects are built around the question “What are we going to do next?” But occasionally we’re asked to think farther out. Projects focused on the 5-10 year range are more about “Where are we headed?” and “What’s going to inspire people?” These are different questions to ask, and answering them changes the usual process of interaction design.

I’ve been thinking about these things for a while, and while at the MobX conference in Berlin I conducted a workshop where a group of 16 designers and strategists took a look at how you answer these questions.

So…how do you do it? The core of the matter is to understand what’s going to be different in the future you’re designing for.

These kinds of projects are less about “What’s next?” and more about “Where are we headed?” and “What’s going to inspire people?”

Read More
