Interaction14 – Is it Science, Art or something else?

While Friday’s talks seemed quite level-headed compared to Thursday’s design extravaganza, they weren’t any less provocative. Take a look at some of Friday’s highlights (or sneak ahead to Saturday).

The De-Intellectualization of Design

Dan Rosenberg

Sketchnote by @ChrisNoessel

The De-Intellectualization of Design Big Idea:

Daniel Rosenberg, one of the old guard of Human-Computer Interaction, bemoaned the loss of a computer-science-heavy approach to interaction design. He then shared his three-part antidote: industry certification, employing Chief Design Officers, and better design education (read: grounded in computer science and cognitive science). Guess which one of these was the audience’s “favorite”?

Hint:

Full description of The De-Intellectualization of Design here.

An excellent counterpoint to Dan’s observation was Irene Au’s early-morning mindfulness talk.

Read More

Interaction14 – Food, Comics, and the UI of Nature

Interaction14 is off to a blazing start, and man if it doesn’t sound like a kaleidoscope of designers, thought-leaders, and crazy beautiful ideas. There’s everything from interactive skateboard ramps to talks about principles of user experience design learned from cats.

Exactly what kind of “conference” is this?

This year Cooper sent over a troop of people for inspiration and elucidation, and to capture some of the creative spark that only happens when you put hundreds of brilliant people in a big room for four days. In between workshops, talks, and happy hours, they’ve been slapping together some pretty stunning sketchnotes for us local folks. Here are notes from four of the talks that went down on Thursday. See sketchnotes from Friday and Saturday too!

Read More

Augmented Experience


Photo via Reuters / Carlo Allegri

Let’s be honest: Google Glass looks pretty silly. Its appearance is out of time, futuristic, and obnoxiously so. And it’s out of place in daily life—a strange accessory with mysterious purpose, as if someone were to walk around all day with a skateboard on a leash.

But Glass also points to an intriguing future, one in which the line between using a digital device and simply going about daily life is removed. Whereas traditional spectacles serve a corrective purpose, helping us see reality more clearly, Glass offers a new category of lenses that promise to augment the reality we see. It opens a vast new frontier for the practice of interaction design that, like the Wild West, is full of lawlessness and danger and promise. And it is the UX community that will shape this landscape; we will determine its character, and the impact it will have on people’s lives.

A key question all this raises is: what “reality” is Glass augmenting? At the moment, Glass being a Google product, the augmentation is designed primarily to target the urban economic and social spheres. Looking down the street through Glass, you may see restaurant storefronts adorned with floating metadata describing the cuisine type and star ratings from previous diners. Turn your head, and an indicator points toward your next calendar appointment. Peer at a product on the shelf, and prices for similar products appear for easy comparison. You’ll always know where you are, where you need to be, and what you’re looking at. The reality that Glass augments is a realm of people, objects, places of business, and locations. In other words, it is what can be expressed in a database and efficiently searched.

Toward a better future

At this point in the conversation, the story usually veers into the realm of exasperation and despair. Google Glass represents the death of spontaneity! It will systematize and computerize our lives! Organic experience will be lost! (And, most insidious of all) Google will monitor and monetize every saccade of our eyeballs, every step we take!


“Big Brother” from the film adaptation of Orwell’s 1984

Given the penchant for technologists to base business models on advertising and “big data” about their customers, it is not surprising that Google Glass can be seen as a kind of portable panopticon. But I think the truth that this device foreshadows is something potentially more benign, and almost certainly beneficial.

The dystopian narrative that depicts a society dominated by machines and ubiquitous surveillance is common, expressed through fiction, film, and even journalism, which tends to draw on the same sinister rhetoric. George Orwell’s 1984 describes the homogenization and suppression of culture through rules, systems, and constant surveillance. In a more recent popular expression, Pixar’s Wall-E imagines a future humanity composed of zombie-like innocents, shuttled along by automated chairs, staring feebly into digital screens, mobilized—and controlled—by machines. The plausibility of these futures is made even more vivid by the unfolding story of the depth of NSA surveillance.

To paraphrase a recent piece by Don Norman, it all depends on how we design and develop augmented reality applications. If we manage to create genuinely useful applications for wearable technologies like Google Glass, people will benefit. At first this seems more like a truism than a truth. But the obviousness of the statement belies the underlying premise: Google Glass and its future iterations are simply a canvas on which we can write the future of our “augmented” everyday experience. So let’s not leave it all up to Google, shall we?

Big ideas

Ideas for the positive future of augmented reality abound. Augmedix, for example, is a small company with a vision of Google Glass reshaping the doctor-patient relationship. Increasingly, the burden of the new and fraught world of digital medical records is damaging this interaction. Doctors stare at screens instead of faces; they spend as much time clicking checkboxes and radio buttons as they do examining the bodies and listening to the voices of the people under their care. Augmented reality could turn this scenario on its head by allowing doctors to look at and converse with their patients while simultaneously accessing and transmitting important information through Glass. This will almost certainly lead to fewer errors, greater trust, and ultimately better health outcomes.


A doctor wears Glass with the Augmedix app.

Or consider William Gibson’s Spook Country, a novel in which a central character creates “locative art,” what you might call augmented reality sculpture. Imagine looking at a city fountain through your augmentation goggles and seeing a bloom of light and color where others see only water. That we could transform our physical landscape in a way that enhances its beauty—rather than simply enhancing its economic potential—is a stunning notion. Unlike 3D movie glasses or straight-up “virtual reality,” a physical/virtual mashup offers us a chance to experiment and play in realms previously available only to screens and displays, without losing the sense of being present in a place, a loss that virtual reality cannot avoid. We remain in the real world.

The design of augmented reality

The first attempts to harness the power of Glass-like technology will be “ports,” shoehorning old functionality into a new form factor. Text and email messages will appear, caller ID will notify you of a phone call, the forward-facing camera will take a picture or video on command. But none of these use cases address new goals. They simply make achieving old goals incrementally faster or more convenient. I don’t have to lift my phone and look at the screen to see a text message or know who’s calling. I don’t have to lift my camera and press a button to take a picture. The difference in my experience enabled by porting functionality from my phone to Glass is a difference of degree, not of kind.

More interesting will be the forays into using augmented reality to meet previously unmet goals. Augmedix is a good example, because it bucks the trend toward less personal medicine and serves the goals of both doctor and patient. Locative art is similarly interesting, because it provides an entirely new artistic medium and a new way of experiencing that art. Mapping and orientation in a visually augmented world represent another fundamental change, because they bridge the gap between the abstract 2D map and the immediately actionable—a translation that currently happens in the human brain.

Go get ’em

Augmented reality is in its infancy. Google Glass still faces some serious challenges, especially on the hardware front: the device needs to be miniaturized and made less obtrusive, so that wearing it feels less like dragging a skateboard on a leash everywhere you go. But the frontier for experience design this device opens up is huge, and it doesn’t have to remain within the boundaries Google sets. Part of our challenge and calling as a UX community is to think deeply about what an augmented experience feels like, and how it shapes people’s lives. As you would with any user experience, let unmet user goals guide your design.

Your role in this revolution is just beginning.

Television is dead. Or is it?

How the Internet, devices, and a new generation of viewers are redefining the “boob tube” of the future

Announcing the next Cooper Parlor: The Future of TV

When: Thursday, October 24th (Networking at 6, event starts at 6:30)
Moderated by: Richard Bullwinkle, Head of US Television Innovation at Samsung, and Jeremy Toeman, CEO of the startup Dijit Media
Where: Cooper’s Studio, 85 2nd Street, 8th Floor, San Francisco
Cost: $10
Tickets

Once, television was simple. Families gathered religiously around a glowing box to watch the latest episode of “I Love Lucy.” Fast-forward to today: the Internet enables a multitude of new viewing devices, and wildly different viewing habits have turned “television” on its head. In this Cooper Parlor, Richard Bullwinkle, Head of US Television Innovation at Samsung, and Jeremy Toeman, CEO of the startup Dijit Media, will share some curious trends in media consumption, technological advances, and the evolution of show content and format. Then, they’ll lead a brainstorming session to rethink the “television of the future” together.

Here are just a few curious factoids we’ll explore:

  • What is the #1 device for watching Netflix? The iPad? A laptop? It turns out it’s the Sony PlayStation 3. Why do viewers flock to this device rather than the connected TV or an iPad?
  • Over 90% of all TV viewers use a second screen while watching TV. How might this impact the way we design the television experience and programming?
  • Can you guess why 70% of connected TVs in the US actually get connected to the internet, but only 30% do in Europe?

Join us as we discuss where TV is headed, and generate new ideas for what television can be!

What is the Cooper Parlor?

The Cooper Parlor is a gathering of designers and design-minded people to exchange ideas around a specific topic. We aim to cultivate conversation that instigates, surprises, entertains, and most importantly, broadens our community’s collective knowledge and perspective about the potential for design.

Explore New Interaction Paradigms at UX Boot Camp: Wikimedia

Advance and apply your UX design skills to a meaningful real-world problem in this intensive, hands-on workshop


This September, join Wikimedia, Cooper, and design thinkers from around the world as we find new ways to spread knowledge through mobile Wikipedia. In this four-day workshop, you’ll use new UX skills to make mobile content contribution more approachable, more intuitive, and less reliant on traditional input methods like typing. If you’ve wanted an excuse to explore new interaction paradigms and stay ahead of the design pack, this is your chance. Best of all, you get to do all of that in the creative classroom setting of Alan and Sue Cooper’s 50-acre ranch in Petaluma, CA.

Register now: UX Boot Camp: Wikimedia, September 17-20, Petaluma, CA

What’s in it for you?

  • Learn new interaction techniques and approaches under the guidance of industry leaders, including Alan Cooper.
  • Learn how to think through a problem from both a design and business perspective, rather than blindly applying methods by rote.
  • Energize your practice and make new connections by working on a real-world challenge with peers from around the world.
  • Beef up your portfolio with a smart new design concept.
  • Pick up leadership and collaboration skills that will help you better navigate your work environment.

Read More

Design the Future of Radio

According to popular belief, radio is dead.

It’s not; it’s just taking a different form. Instead of families gathering around a radio to hear the nightly news, people are staying informed by listening to the “All Things Considered” podcast or following Fareed Zakaria on Twitter.

So how does a radio program make the transition from on-air to online and define its role in journalism in the digital age? And how can designers influence how radio content is generated, shared, and consumed?

In the June UX Boot Camp, through experimentation and exploration, participants will redesign how listeners interact with radio content. They’ll conduct this examination through a radio program you may have heard on your local public radio station: Marketplace Money.

American Public Media’s Marketplace Money is a weekly public radio program, airing locally on KQED, that looks at matters of personal finance with wit and wisdom. In this particular UX Boot Camp, students will work with the Marketplace Money team to transform the experience of radio. They’ll come up with new tools and models for engagement that encourage multi-platform participation, crowd-sourced content, and an entirely new type of relationship between listeners and the show’s host.

Sound like a challenge you want to solve? Save your spot now.
Read More

Austin in SXSW – The Digital Master (3 of 3)

Last week we spoke about the impending changes in our move from automated to intelligent services. Less UI and more AI might be a killer combination, bringing ease and delight to the complexities of the modern world. This week we’ll see how this kind of continuous disruption amounts to more than just a killer app.

The digital master of process

From Lean UX to continuous integration, our processes for generating new ideas are increasingly driven by analytics and usage stats. What allows us to navigate the murky waters of uncertain customer resonance is the intangible skill of vision-making: visions that exist only in pixels. Rather than capturing value through physical objects, we’re commanding premium prices for services and, increasingly, experiences. But there’s also a dark side to the disruption spurred by the collusion of design and technology.

Read More

The Great UX Debate

Are designers responsible for the impact of their work upon human behavior?
Is it actually possible to create “connected” experiences across devices?
Do designers need to speed up, or do stakeholders need to slow down?

In January, Angel Anderson, Mikkel Michelsen, Robb Stevenson, Lou Lenzi, Donald Chestnut, and I poked and prodded at these topics during the Interaction 13 conference. About 500 people attended the debate, and they threw their own perspectives into the mix in the latter part of the conversation. Have a listen in the video below.

(And thanks to SapientNitro for the opportunity to meet such interesting people, expand my own perspective, and make use of what I learned on my high school debate team. Ha!)
