Augmented Experience


Photo via Reuters / Carlo Allegri

Let’s be honest: Google Glass looks pretty silly. Its appearance is out of time, futuristic, and obnoxiously so. And it’s out of place in daily life—a strange accessory with mysterious purpose, as if someone were to walk around all day with a skateboard on a leash.

But Glass also points to an intriguing future, one in which the line between using a digital device and simply going about daily life is erased. Whereas traditional spectacles correct our vision so that we see reality more clearly, Glass offers a new category of lenses that promise to augment the reality we see. It opens a vast new frontier for the practice of interaction design that, like the Wild West, is full of lawlessness and danger and promise. And it is the UX community that will shape this landscape; we will determine its character, and the impact it will have on people’s lives.

A key question all this raises is: what “reality” is Glass augmenting? Because Glass is a Google product, its augmentation is designed primarily to target the urban economic and social spheres. Looking down the street through Glass, you may see restaurant storefronts adorned with floating metadata describing the cuisine and star ratings from previous diners. Turning your head, you see an indicator pointing toward your next calendar appointment. Peering at a product on the shelf, you see prices for similar products displayed for easy comparison. You’ll always know where you are, where you need to be, and what you’re looking at. The reality that Glass augments is a realm of people, objects, places of business, and locations. In other words, it is whatever can be expressed in a database and efficiently searched.

Toward a better future

At this point in the conversation, the story usually veers into the realm of exasperation and despair. Google Glass represents the death of spontaneity! It will systematize and computerize our lives! Organic experience will be lost! And, most insidious of all: Google will monitor and monetize every saccade of our eyeballs, every step we take!


“Big Brother” from the film adaptation of Orwell’s 1984

Given technologists’ penchant for basing business models on advertising and “big data” about their customers, it is not surprising that Google Glass can be seen as a kind of portable panopticon. But I think the future this device foreshadows is potentially more benign, and almost certainly beneficial.

The dystopian narrative that depicts a society dominated by machines and ubiquitous surveillance is common, expressed through fiction, film, and even journalism, which tends to draw on the same sinister rhetoric. George Orwell’s 1984 describes the homogenization and suppression of culture through rules, systems, and constant surveillance. In a more recent popular expression, Pixar’s WALL-E imagines a future humanity composed of zombie-like innocents, shuttled along by automated chairs, staring feebly into digital screens, mobilized—and controlled—by machines. The plausibility of these futures is made even more vivid by the unfolding story of the depth of NSA surveillance.

To paraphrase a recent piece by Don Norman, it all depends on how we design and develop augmented reality applications. If we manage to create genuinely useful applications with wearable technologies like Google Glass, people will benefit. At first that sounds like a truism. But the obviousness of the statement belies the underlying premise: Google Glass and its future iterations are simply a canvas on which we can write the future of our “augmented” everyday experience. So let’s not leave it all up to Google, shall we?

Big ideas

Ideas for the positive future of augmented reality abound. Augmedix, for example, is a small company with a vision of Google Glass re-shaping the doctor-patient relationship. Increasingly, the burden of the new and fraught world of digital medical records is damaging this relationship. Doctors stare at screens instead of faces; they spend as much time clicking checkboxes and radio buttons as they do examining the bodies and listening to the voices of the people under their care. Augmented reality could turn this scenario on its head by allowing doctors to look at and converse with their patients while simultaneously accessing and transmitting important information through Glass. This will almost certainly lead to fewer errors, greater trust, and ultimately better health outcomes.


A doctor wears Glass with the Augmedix app.

Or consider William Gibson’s Spook Country, a novel in which a central character creates “locative art,” what you might call augmented reality sculpture. Imagine looking at a city fountain with your augmentation goggles and seeing a bloom of light and color where others see only water. That we could transform our physical landscape in a way that enhances its beauty—rather than simply enhancing its economic potential—is a stunning notion. Unlike 3D movie glasses or straight-up “virtual reality,” the idea of a physical/virtual mashup offers us a chance to experiment and play in realms previously available only to the world of screens and displays, without losing the sense of being present in a place, a loss that virtual reality cannot avoid. We remain in the real world.

The design of augmented reality

The first attempts to harness the power of Glass-like technology will be “ports,” shoe-horning old functionality into a new form factor. Text and email messages will appear, caller ID will notify you of a phone call, the forward-facing camera will take a picture or video on command. But none of these use cases address new goals. They simply make achieving old goals incrementally faster or more convenient. I don’t have to lift my phone and look at the screen to see a text message or know who’s calling. I don’t have to lift my camera and press a button to take a picture. The difference in my experience enabled by porting functionality from my phone to Glass is a difference of degree, not a difference in kind.

More interesting will be the forays into using augmented reality tech to solve previously unmet goals. Augmedix is a good example, because it bucks the trend toward less personal medicine and solves both a doctor goal and a patient goal. Locative art is similarly interesting, because it provides an entirely new artistic medium and a new way of experiencing art. Mapping and orientation in a visually augmented world represent another fundamental change, because they bridge the gap between the abstract 2D map and the immediately actionable—a translation that currently happens in the human brain.

Go get ’em

Augmented reality is in its infancy. Google Glass still faces some serious challenges, especially on the hardware front—miniaturizing the device and making it less obtrusive will be necessary if wearing it is to feel less like pulling a skateboard on a leash everywhere you go. But the frontier this device opens up for experience design is huge, and it doesn’t have to remain within the boundaries Google sets. Part of our challenge and calling as a UX community is to think deeply about what an augmented experience feels like, and how it shapes people’s lives. As you would with any user experience, let unmet user goals guide your design.

Your role in this revolution is just beginning.

Can illegal networks of zombie computers be a force for… good?

Whenever a major website has significant downtime, people start to wonder: is it intentional? Is Anonymous behind it? Or a secretive group of enemy government hackers?

It’s a reasonable suspicion; as it turns out, DDoS—distributed denial of service—attacks are relatively easy to pull off these days. To pull one off, a ne’er-do-well need only harness thousands of “zombie” computers, point them toward the intended target, and harass the web servers with so much traffic that they are overwhelmed. The effect is temporary, but it can cause severe economic damage.

It used to be that coordinating such an attack required a great deal of skill. A criminal first needed to infiltrate those thousands of machines using a trojan horse or other malware. To harness their collective power, the criminal would then stitch them together into a “botnet” by devising a way to issue commands to all of them remotely, and finally bend them to whatever nefarious purpose they had in mind. (Besides DDoS attacks, botnets also send a lot of spam.) Today, however, pre-configured botnets can be rented for a pittance. One source claims to rent out a 10,000-strong network of zombie machines for $200.

This got me wondering: why not rent a botnet, and use it for good?

By Tom-b (Own work) [CC-BY-SA-3.0], via Wikimedia Commons

Read More

Telling visual stories for science

The other night I attended a presentation/panel discussion about visual science communication. Well, I should say I had a terrific dinner at Wexler’s first, then attended a presentation/panel discussion. These panels are better with a cocktail in you.

The event took place at swissnex. I think they like their name uncapitalized. I’m still a bit unclear about what swissnex is. The name struck me as delicious-sounding, like something you’d pair with Nutella in the morning. Swissnex. Your Toast’s Best Friend.

I read their annual report and sat in their event space, so I know that they are a non-profit, they are staffed by lots of competent Swiss people, and they like to underline text. I’m guessing it’s some kind of quasi-governmental Swiss cultural mission. Anyway, they host presentations about art and science, and do fun things like get Swiss kids to think about what 2023 will look like. All very wholesome.

The speakers at this event were a motley crew, and some are doing truly interesting work designing things to communicate science to the public. There was Michele Johnson, for example, a “public affairs officer” for the Kepler mission at NASA Ames. Kepler is a space telescope orbiting the sun, looking for Earth-like planets. She talked about how they manage to create a huge, beautifully rendered picture of a distant planet using only 6 pixels of image data. Obviously, it involves making a lot of assumptions. (I think the Kepler people are a tad jealous of Hubble, which pumps out eye candy for the public with no need to emblazon “artist’s rendering” all over it like a Barry Bonds asterisk. I’d be jealous, too. It’s the difference between a webcam from 1995 and a telephoto DSLR. But they do impressive work, despite their constraints.)

Another interesting panelist was Ryan Wyatt, the director of the planetarium at the California Academy of Sciences. He showed us the visualization his team created for their EARTHQUAKE!!! exhibit. Pretty sweet. And kind of mind-bending, because they’re designing this uber-animation for the domed ceiling of the planetarium, projected with at least a half dozen overlapping light systems. They are an active and talented bunch, it seems. Six full-time staff work on science visualizations at the museum. (Edit: over-estimated the size of the team. Thanks, Ryan!)

There was also Joe Hanson, who does a PBS YouTube show called “It’s OK to Be Smart.” His main point: creating engaging video content (about science, or drunk make-up tips, or whatever) is easy, can be done on a shoestring budget, and please please please release your stuff under a Creative Commons license so that other people can remix and reuse it for free.

It ended late, so I wasn’t in the mood to hob-nob too much. Plus that cocktail was beginning to weigh on my consciousness. But I left with the feeling that the problems the UX community faces aren’t so different from those facing our compatriots in science visualization. Sure, science viz is less concerned with usability and affordance (museum exhibits being a big exception). But we both have to synthesize input from subject matter experts. We both juggle the demands of clients and users and resources. We both strive to create artifacts that engage our users, drawing them in, immersing them in an experience, distilling complexity into its essential pieces. Our two communities, seemingly distinct, have a lot to learn from each other.

Ask the right questions, solve the right problems

UX design is fundamentally about solving problems. We call a design “good” if it solves a problem elegantly, cheaply, usably, and so on. I think it’s fair to say, though, that too little attention is paid to which problems need solving, which questions need answering. The interaction design practicum at Cooper U offers a slew of tools for solving design problems, but the really eye-opening parts of the course taught me to back up a step and think about how to find the right problem in the first place.

Over-focusing on design solutions is natural. Solving the problem is the fun part of the job, after all. Smart workflows, elegant wireframes, typographical brilliance, beautiful gradients, and clever CSS are the exciting materializations of great design thinking. Talking to people outside the organization is time-consuming and expensive, so intuition often substitutes for user research. But, as Cooper U hammered home, successful user-centered design has to mean more than relying on stale or imagined assumptions about the people to whom our design solutions ultimately matter.

A lot of design begins with someone asking “What do users want?” The temptation is then to go ask some users what they want. This frequently leads in the wrong direction; too often people don’t know how to articulate what they want. A “disruptive” product is precisely that: something people didn’t realize they wanted until they saw it, disrupting what they imagine to be possible.

A better question to ask is: “What do users do?” This is where user research comes in. Users have ingrained mental models, habits, rituals, and idiosyncrasies. Finding the patterns is key to finding the right problems to solve.

At Cooper U, we practiced observing and describing and interviewing and categorizing users. Here’s what I learned: useful user research is difficult, draining, and requires practice. You can’t just wing it. It takes planning, persistence, and the right methods.

In these past months, I’ve done real-world user research for a number of design projects. Every researcher develops their own style, but the good ones are tireless recorders and observers. They let what they witness in the real world seep in and reveal the behavioral patterns of real people. Only then do they try to figure out what users want, and crystallize these patterns and desires into personas. They ask the right questions, then solve the right problems.

Get some

Don’t start designing until you’ve asked the right questions. Design things users actually want. If you want to up your user research game and bring new user-centered design skills to your practice and organization, check out one of our upcoming Cooper U courses.