Inside Goal-Directed Design: A Conversation With Alan Cooper (Part 2)

We continue our conversation with Alan Cooper at Sue and Alan’s warm and welcoming ranch in Petaluma, CA, which, in addition to themselves, is home to sheep and chickens, a cat named Monkey, and a farmer who works the land.

Part 2 brings us up to the present day and into a discussion of the applications and fundamentals of Goal-Directed Design that support its success at Cooper and beyond.

From Theory to Practice

Read More

Interaction14 – Is it Science, Art or something else?

While Friday’s talks seemed quite level-headed compared to Thursday’s design extravaganza, they weren’t any less provocative. Take a look at some of Friday’s highlights (or sneak ahead to Saturday).

The De-Intellectualization of Design

Dan Rosenberg

Sketchnote by @ChrisNoessel

Big Idea:

Daniel Rosenberg, one of the old guard of Human-Computer Interaction, bemoaned the loss of a computer-science-heavy approach to interaction design. He then shared his three-part antidote: industry certification, employing Chief Design Officers, and better design education (read: computer- and cognitive-science based). Guess which one of these was the audience’s “favorite”?

Hint:

Full description of The De-Intellectualization of Design here.

An excellent counterpoint to Dan’s observation was Irene Au’s early-morning mindfulness talk.

Read More

Interaction14 – Food, Comics, and the UI of Nature

Interaction14 is off to a blazing start, and man if it doesn’t sound like a kaleidoscope of designers, thought-leaders, and crazy beautiful ideas. There’s everything from interactive skateboard ramps to talks about principles of user experience design learned from cats.

Exactly what kind of “conference” is this?

This year Cooper sent over a troop of people for inspiration and elucidation, and to capture some of the creative spark that only happens when you put hundreds of brilliant people in a big room for 4 days. In between workshops, talks, and happy hours, they’ve been slapping together some pretty stunning sketchnotes for those of us back home. Here are notes from 4 of the talks that went down on Thursday. See sketchnotes from Friday and Saturday too!

Read More

Man’s Best App

How do you design an engaging and educational application that prepares a user with short-term memory loss for a lifestyle change?

For the November UX Boot Camp, designers, developers, and product managers from around the world teamed up to answer that very challenge for Canine Companions for Independence, the largest non-profit provider of service dogs.

Led by senior designers from Cooper, UX Boot Camp participants got their hands dirty learning new UX design techniques, collaborating with new teams, and working closely with stakeholders from Canine Companions.

From kickoff to design delivery, UX Boot Camp participants took a hands-on role in the generation, exploration, and synthesis of five distinct and fully-developed design concepts.

Read More

New Peers, Practices, and Perspectives

Takeaways from Cooper U in Philadelphia

A guest post by Cooper U alumna Hanna Kang-Brown

As a career changer and the first UX Designer to be hired at my company, I do a lot of self-learning on the job. Reading books and blogs has been essential to developing my UX process, but when I had the opportunity to attend Cooper U’s Interaction Design Training in Philadelphia this past December, I jumped at the chance. I wanted a week of hands-on training and the opportunity to learn a thorough interaction design process with a group of other professionals. Some highlights from the week and my biggest takeaways are below.

My Biggest Takeaways

Clarifying Process
I was already familiar with the interaction design process, but the course helped deepen my understanding of it through hands-on activities. I discovered ways in which I had cut corners in my design process, and how I could end up with a better product if I spent more time initially considering business stakeholder goals and personas, and sketching out scenarios.

Speaking of Sketching
I’ve always been a reluctant sketcher because I never thought I was very good at it. We did a lot of sketching, from user profiles to storyboards and wireframes, and it helped me gain more confidence and a better appreciation for its usefulness as a lightweight prototyping method.

Read More

Designing the Future: Cooper in Berlin

Most software projects are built around the question “What are we going to do next?” But occasionally we’re asked to think farther out. Projects focused on the 5-10 year range are more about “Where are we headed?” and “What’s going to inspire people?” These are different questions to ask, and answering them changes the usual process of interaction design.

I’ve been thinking about these questions for a while, and at the MobX conference in Berlin I ran a workshop in which a group of 16 designers and strategists took a look at how you answer them.

So…how do you do it? The core of the matter is to understand what’s going to be different in the future you’re designing for.


Read More

Augmented Experience


Photo via Reuters / Carlo Allegri

Let’s be honest: Google Glass looks pretty silly. Its appearance is out of time, futuristic, and obnoxiously so. And it’s out of place in daily life—a strange accessory with mysterious purpose, as if someone were to walk around all day with a skateboard on a leash.

But Glass also points to an intriguing future, one in which the line between using a digital device and simply going about daily life is removed. Whereas traditional spectacles serve a corrective purpose, helping us see reality more clearly, Glass offers a new category of lenses that promise to augment the reality we see. It opens a vast new frontier for the practice of interaction design that, like the Wild West, is full of lawlessness and danger and promise. And it is the UX community that will shape this landscape; we will determine its character and the impact it will have on people’s lives.

A key question all this raises is: what “reality” is Glass augmenting? At the moment, Glass being a Google product, the augmentation is designed primarily to target the urban economic and social spheres. Look down the street through Glass, and you may see restaurant storefronts adorned with floating metadata describing the cuisine and star ratings from previous diners. Turn your head, and an indicator points toward your next calendar appointment. Peer at a product on the shelf, and prices for similar products appear for easy comparison. You’ll always know where you are, where you need to be, and what you’re looking at. The reality that Glass augments is a realm of people, objects, places of business, and locations: in other words, whatever can be expressed in a database and efficiently searched.
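To make that last point concrete, here is a minimal, purely illustrative sketch in Python of the kind of lookup such an overlay implies. The Place record, the sample data, and the nearby_places helper are hypothetical stand-ins, not any real Glass or Google API; the point is only that this flavor of “augmentation” reduces to querying structured records near the wearer and drawing the results over the view.

# Hypothetical sketch: the "reality" Glass augments is whatever fits in a
# structured record and can be searched. None of this is a real Glass API.
from dataclasses import dataclass
from math import hypot

@dataclass
class Place:
    name: str
    cuisine: str
    rating: float  # star rating from previous diners
    x: float       # position in metres, relative to the wearer
    y: float

PLACES = [
    Place("Noodle Bar", "Ramen", 4.3, x=12.0, y=5.0),
    Place("Trattoria", "Italian", 3.8, x=40.0, y=-8.0),
    Place("Cafe 101", "Coffee", 4.6, x=150.0, y=30.0),
]

def nearby_places(places, max_distance_m=50.0):
    """Return the places close enough to annotate in the wearer's view."""
    return [p for p in places if hypot(p.x, p.y) <= max_distance_m]

for place in nearby_places(PLACES):
    # On a head-mounted display this string would float over the storefront.
    print(f"{place.name}: {place.cuisine}, rated {place.rating}")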

Toward a better future

At this point in the conversation, the story usually veers into the realm of exasperation and despair. Google Glass represents the death of spontaneity! It will systematize and computerize our lives! Organic experience will be lost! (And, most insidious of all) Google will monitor and monetize every saccade of our eyeballs, every step we take!


“Big Brother” from the film adaptation of Orwell’s 1984

Given the penchant for technologists to base business models on advertising and “big data” about their customers, it is not surprising that Google Glass can be seen as a kind of portable panopticon. But I think the truth that this device foreshadows is something potentially more benign, and almost certainly beneficial.

The dystopian narrative that depicts a society dominated by machines and ubiquitous surveillance is common, expressed through fiction, film, and even journalism, which tends to draw on the same sinister rhetoric. George Orwell’s 1984 describes the homogenization and suppression of culture through rules, systems, and constant surveillance. In a more recent popular expression, Pixar’s Wall-E imagines a future humanity composed of zombie-like innocents, shuttled along by automated chairs, staring feebly into digital screens, mobilized—and controlled—by machines. The plausibility of these futures is made even more vivid by the unfolding story of the depth of NSA surveillance.

To paraphrase a recent piece by Don Norman, it all depends on how we design and develop augmented reality applications. If we manage to create useful and utility-producing applications with wearable technologies like Google Glass, people will benefit. This seems at first more like a truism than truth. But the obviousness of the statement belies the underlying premise, which is that Google Glass and its future iterations are simply a canvas on which we can write the future of our “augmented” everyday experience. So let’s not leave it all up to Google, shall we?

Big ideas

Ideas for the positive future of augmented reality abound. Augmedix, for example, is a small company with a vision of Google Glass re-shaping the doctor-patient relationship. Increasingly, the burden of the new and fraught world of digital medical records is damaging this interaction. Doctors stare at screens instead of faces; they spend as much time clicking checkboxes and radio buttons as they do examining the bodies and listening to the voices of the people under their care. Augmented reality could turn this scenario on its head by allowing doctors to look at and converse with their patients while simultaneously accessing and transmitting important information through Glass. This will almost certainly lead to fewer errors, an increase in trust, and ultimately better health outcomes.


A doctor wears Glass with the Augmedix app.

Or consider William Gibson’s Spook Country, a novel in which a central character creates “locative art,” what you might call augmented-reality sculpture. Imagine looking at a city fountain with your augmentation goggles and seeing a bloom of light and color where others see only water. That we could transform our physical landscape in a way that enhances its beauty—rather than simply enhancing its economic potential—is a stunning notion. Unlike 3D movie glasses or straight-up “virtual reality,” the idea of a physical/virtual mashup offers us a chance to experiment and play in realms previously available only to the world of screens and displays, without losing the sense of being present in a place, a loss that virtual reality cannot avoid. We remain in the real world.

The design of augmented reality

The first attempts to harness the power of Glass-like technology will be “ports,” shoe-horning old functionality into a new form factor. Text and email messages will appear, caller ID will notify you of a phone call, the front-facing camera will take a picture or video on command. But none of these use cases address new goals. They simply make achieving old goals incrementally faster or more convenient. I don’t have to lift my phone and look at the screen to see a text message or know who’s calling. I don’t have to lift my camera and press a button to take a picture. The difference in my experience enabled by porting functionality from my phone to Glass is a difference of degree, not a difference in kind.

More interesting will be the forays into using augmented reality tech to solve previously unmet goals. Augmedix is a good example, because it bucks a trend toward less personal medicine and solves both a doctor and a patient goal. Locative art is similarly interesting, because it provides an entirely new artistic medium and way of experiencing that art. Mapping and orientation in a visually augmented world represents another fundamental change, because it bridges the gap between the abstract 2D map and the immediately actionable—a translation that currently happens in the human brain.

Go get ’em

Augmented reality is in its infancy. Google Glass still faces some serious challenges, especially on the hardware front: the device needs to be miniaturized and made less obtrusive so that wearing it feels less like pulling a skateboard on a leash everywhere you go. But the frontier this device opens up for experience design is huge, and it doesn’t have to remain within the boundaries Google sets. Part of our challenge and calling as a UX community is to think deeply about what an augmented experience feels like and how it shapes people’s lives. As you would with any user experience, let unmet user goals guide your design.

Your role in this revolution is just beginning.

Inside the IxDA 2014 Student Design Challenge

Photo by Jeremy Yuille

As co-chair of the 2014 IxDA Student Design Challenge with Dianna Miller, I recently had the pleasure of announcing this year’s theme, “Information for Life,” sponsored by the Bill and Melinda Gates Foundation.

Now in its fifth year, the IxDA Student Design Challenge (SDC) will run during the Interaction14 conference in Amsterdam, February 5-8, 2014. The competition brings together exceptional undergraduate and graduate students for both critical thinking and hands-on experiences over the course of the conference. Here, students have the opportunity to present their work in a way that shows rather than tells, and it’s also a terrific venue for connecting with colleagues, potential employers, funders, or new networks.

And I speak from experience — this competition holds a special place in my heart as I was a participant myself just a few years ago, in 2011.

Read More

Engaging Millennials – the UX Boot Camp: Wikipedia

As mobile devices become widely adopted, organizations are increasingly focused on designing engaging experiences across multiple platforms. At Cooper’s UX Boot Camp with Wikimedia, the non-profit took this a step further, challenging the class of designers to create a solution that facilitated content input and encouraged a new group of editors, specifically Millennial women, to contribute through mobile devices.

Read More
