The final day of Interaction14 was full of talks about collaboration and communication. Let’s kick off with a talk from our own Chris Noessel on a practice that is central to how Cooperistas design.
Sketchnote by @ChrisNoessel
The De-Intellectualization of Design

Big Idea:
Daniel Rosenberg, one of the old guard of Human-Computer Interaction, bemoaned the loss of a computer-science heavy approach to interaction design. He then shared his three-part antidote: Industry certification, employing Chief Design Officers, and better design education (read: computer and cognitive-science based). Guess which one of these was the audience’s “favorite”?
The big question of certification: Who will certify the certifiers? #ixd14
— Jared Spool (@jmspool) February 7, 2014
Full description of The De-Intellectualization of Design here.
An excellent counterpoint to Dan’s observation was Irene Au’s early-morning mindfulness talk.
This year, Cooper is excited to host the SF Service Jam, March 7-9.
No, this is not that kind of jam. Think of a music jam: instead of feeding off each other's instruments to come up with interesting songs, we will feed off each other's ideas to come up with creative service solutions.
Let’s be honest: Google Glass looks pretty silly. Its appearance is out of time, futuristic, and obnoxiously so. And it’s out of place in daily life—a strange accessory with mysterious purpose, as if someone were to walk around all day with a skateboard on a leash.
But Glass also points to an intriguing future, one in which the line between using a digital device and simply going about daily life is removed. Whereas traditional spectacles have a corrective purpose, helping us see reality more clearly, Glass offers a new category of lenses that promise to augment the reality we see. It opens a vast new frontier for the practice of interaction design that, like the Wild West, is full of lawlessness and danger and promise. And it is the UX community that will shape this landscape; we will determine its character and the impact it will have on people's lives.
A key question all this raises is: what "reality" is Glass augmenting? At the moment, Glass being a Google product, the augmentation primarily targets the urban economic and social spheres. Looking down the street through Glass, you may see restaurant storefronts adorned with floating metadata describing the cuisine and star ratings from previous diners. As you turn your head, an indicator points toward your next calendar appointment. When you peer at a product on the shelf, prices for similar products appear for easy comparison. You'll always know where you are, where you need to be, and what you're looking at. The reality that Glass augments is a realm of people, objects, places of business, and locations. In other words, it augments whatever can be expressed in a database and efficiently searched.
At this point in the conversation, the story usually veers into exasperation and despair. Google Glass represents the death of spontaneity! It will systematize and computerize our lives! Organic experience will be lost! And, most insidious of all: Google will monitor and monetize every saccade of our eyeballs, every step we take!
Given the penchant for technologists to base business models on advertising and “big data” about their customers, it is not surprising that Google Glass can be seen as a kind of portable panopticon. But I think the truth that this device foreshadows is something potentially more benign, and almost certainly beneficial.
The dystopian narrative that depicts a society dominated by machines and ubiquitous surveillance is common, expressed through fiction, film, and even journalism, which tends to draw on the same sinister rhetoric. George Orwell's 1984 describes the homogenization and suppression of culture through rules, systems, and constant surveillance. In a more recent popular expression, Pixar's WALL-E imagines a future humanity composed of zombie-like innocents, shuttled along by automated chairs, staring feebly into digital screens, mobilized—and controlled—by machines. The plausibility of these futures is made even more vivid by the unfolding story of the depth of NSA surveillance.
To paraphrase a recent piece by Don Norman, it all depends on how we design and develop augmented reality applications. If we manage to create genuinely useful applications for wearable technologies like Google Glass, people will benefit. At first this seems more like a truism than truth. But the obviousness of the statement belies the underlying premise: that Google Glass and its future iterations are simply a canvas on which we can write the future of our "augmented" everyday experience. So let's not leave it all up to Google, shall we?
Ideas for the positive future of augmented reality abound. Augmedix, for example, is a small company with a vision of Google Glass reshaping the doctor-patient relationship. Increasingly, the burden of the new and fraught world of digital medical records is damaging this interaction. Doctors stare at screens instead of faces; they spend as much time clicking checkboxes and radio buttons as they do examining the bodies and listening to the voices of the people under their care. Augmented reality could turn this scenario on its head by allowing doctors to look at and converse with their patients while simultaneously accessing and transmitting important information through Glass. This would almost certainly lead to fewer errors, greater trust, and ultimately better health outcomes.
Or consider William Gibson's Spook Country, a novel in which a central character creates "locative art," what you might call augmented reality sculpture. Imagine looking at a city fountain through your augmentation goggles and seeing a bloom of light and color where others see only water. That we could transform our physical landscape in a way that enhances its beauty, rather than simply its economic potential, is a stunning notion. Unlike 3D movie glasses or straight-up "virtual reality," a physical/virtual mashup lets us experiment and play in realms previously available only to the world of screens and displays, without losing the sense of being present in a place, something virtual reality cannot offer. We remain in the real world.
The first attempts to harness the power of Glass-like technology will be “ports,” shoe-horning old functionality into a new form factor. Text and email messages will appear, caller ID will notify you of a phone call, the front-facing camera will take a picture or video on command. But none of these use cases address new goals. They simply make achieving old goals incrementally faster or more convenient. I don’t have to lift my phone and look at the screen to see a text message or know who’s calling. I don’t have to lift my camera and press a button to take a picture. The difference in my experience enabled by porting functionality from my phone to Glass is a difference of degree, not a difference in kind.
More interesting will be the forays into using augmented reality tech to solve previously unmet goals. Augmedix is a good example, because it bucks a trend toward less personal medicine and solves both a doctor and a patient goal. Locative art is similarly interesting, because it provides an entirely new artistic medium and way of experiencing that art. Mapping and orientation in a visually augmented world represents another fundamental change, because it bridges the gap between the abstract 2D map and the immediately actionable—a translation that currently happens in the human brain.
Augmented reality is in its infancy. Google Glass still faces some serious challenges, especially on the hardware front; miniaturizing the device and making it less obtrusive is necessary before wearing it stops feeling like walking around with a skateboard on a leash. But the frontier for experience design this device opens up is huge, and it doesn't have to remain within the boundaries Google sets. Part of our challenge and calling as a UX community is to think deeply about what an augmented experience feels like and how it shapes people's lives. As you would with any user experience, let unmet user goals guide your design.
Your role in this revolution is just beginning.
When: Thursday, October 24th (Networking at 6, event starts at 6:30)
Moderated by: Richard Bullwinkle, Head of US Television Innovation at Samsung, and Jeremy Toeman, CEO of the startup Dijit Media
Where: Cooper’s Studio, 85 2nd Street, 8th Floor, San Francisco
Once, television was simple. Families gathered religiously around a glowing box to watch the latest episode of "I Love Lucy." Fast-forward to today: the Internet enables a multitude of new viewing devices, and wildly different viewing habits have turned "television" on its head. In this Cooper Parlor, Richard Bullwinkle, Head of US Television Innovation at Samsung, and Jeremy Toeman, CEO of the startup Dijit Media, will share some curious trends in media consumption, technological advances, and the evolution of show content and format. Then they'll lead a brainstorming session to rethink the "television of the future" together.
Join us as we discuss where TV is headed, and generate new ideas for what television can be!
The Cooper Parlor is a gathering of designers and design-minded people to exchange ideas around a specific topic. We aim to cultivate conversation that instigates, surprises, entertains, and most importantly, broadens our community's collective knowledge and perspective about the potential for design.
Cooper’s UX Boot Camp is a four-day immersion in our user experience design methodology for designers, developers, and product managers. The UX Boot Camp is also an opportunity for nonprofits to explore a challenge they are facing that can be helped by design and technology. Under the guidance of Cooper senior staff, UX Boot Camp students perform an in-depth field study surrounding that challenge, and the nonprofit receives multiple design explorations at no cost.
See what the students and stakeholders had to say about their experience.
If you live in California or New York and you own a cell phone, you probably recently experienced the new Amber Alert capabilities. And by “capabilities,” I mean “the government’s newfound ability to disturb your sleep with non-actionable information.”
In California, the alert that set all this ablaze was in reference to a man, James Lee DiMaggio, who may or may not have killed his friend and her son, burned his house down with them in it, and fled with her daughter. Not that you would have known that from the Amber Alert: "Boulevard, CA AMBER Alert UPDATE: LIC/6WCU986 (CA) Blue Nissan Versa 4 door." Certainly, Twitter has been abuzz about the alerts, and there are dozens of articles on the subject (my personal favorite headline: "Shaquille O'Neal: Yeah I Got That Amber Alert").
Take a look inside Cooper's June 2013 UX Boot Camp with American Public Media's Marketplace Money radio show, where students explored the next horizon of audio programming—a paradigm shift from broadcast to conversation-based platforms.
Students rolled up their sleeves to help the show respond to the trend away from traditional radio by finding the right mix of alternative distribution platforms. Marketplace Money came equally ready to take a radical departure from their current format in order to create a new model that redefines the roles of host, show, and audience in the digital age. To reach this goal, students focused on designing solutions that addressed three big challenges:
At the end of the four-day Boot Camp, student teams presented final pitches to Marketplace Money, and a panel of experienced Cooper designers offered feedback on their ideas and presentations. In the following excerpts from each day, you can test your own sensory preferences for receiving content as you see, hear, and read how design ideas evolved at the Boot Camp, inspiring new relationships between people and radio.
At Sketchin we strongly believe that design can improve lives and foster social good. We first heard of Cooper's UX Boot Camp when we visited Cooper in September 2012, and we fell in love with the idea of using design education to foster social good by connecting design students with nonprofits. This idea was conceived and developed by Kendra Shimmell, Managing Director of Cooper U, and it sparked our determination to be part of a design revolution for social good.
Our first step was to create our own UX Boot Camp modeled after what we experienced at Cooper. So in May 2013, together with Talent Garden Milano and Frontiers of Interaction, we organized the first Italian UX Boot Camp in Milan. Here is a look back at what we created and discovered in the process.
The other night I attended a presentation/panel discussion about visual science communication. Well, I should say I had a terrific dinner at Wexler’s first, then attended a presentation/panel discussion. These panels are better with a cocktail in you.
The event took place at swissnex. I think they like their name uncapitalized. I’m still a bit unclear about what swissnex is. The name struck me as delicious-sounding, like something you’d pair with Nutella in the morning. Swissnex. Your Toast’s Best Friend.
I read their annual report and sat in their event space, so I know that they are a non-profit, they are staffed by lots of competent Swiss people, and they like to underline text. I’m guessing it’s some kind of quasi-governmental Swiss cultural mission. Anyway, they host presentations about art and science, and do fun things like get Swiss kids to think about what 2023 will look like. All very wholesome.
The speakers at this event were a motley crew, and some are doing truly interesting work designing things to communicate science to the public. There was Michele Johnson, for example, a "public affairs officer" for the Kepler mission at NASA Ames. Kepler is a space telescope orbiting the sun, looking for Earth-like planets. She talked about how they manage to create a huge, beautifully rendered picture of a distant planet using only 6 pixels of image data. Obviously, it involves making a lot of assumptions. (I think the Kepler people are a tad jealous of Hubble, which pumps out eye candy for the public with no need to emblazon "artist rendering" all over it like a Barry Bonds asterisk. I'd be jealous, too. It's the difference between a webcam from 1995 and a telephoto DSLR. But they do impressive work, despite their constraints.)
Another interesting panelist was Ryan Wyatt, the director of the planetarium at the California Academy of Sciences. He showed us the visualization his team created for their EARTHQUAKE!!! exhibit. Pretty sweet. And kind of mind-bending, because they’re designing this uber-animation for the domed ceiling of the planetarium, projected with at least a half dozen overlapping light systems. They are an active and talented bunch, it seems. Six full-time staff work on science visualizations at the museum. (Edit: over-estimated the size of the team. Thanks, Ryan!)
There was also Joe Hanson, who does a PBS YouTube show called "It's OK to Be Smart." His main point: creating engaging video content (about science, or drunk make-up tips, or whatever) is easy, can be done on a shoestring budget, and please please please release your stuff under Creative Commons so that other people can remix and reuse it for free.
It ended late, so I wasn't in the mood to hob-nob too much. Plus that cocktail was beginning to weigh on my consciousness. But I left with a feeling that the problems the UX community faces aren't so different from those facing our compatriots in science visualization. Sure, science viz is less concerned with usability and affordance (museum exhibits being a big exception). But we both have to synthesize input from subject matter experts. We both juggle the demands of clients, users, and resources. We both strive to create artifacts that engage our users, drawing them in, immersing them in an experience, distilling complexity into its essential pieces. Our two communities, seemingly distinct, have a lot to learn from each other.