At Cooper’s UX Boot Camp, held between March 25th and March 28th at Monkey Ranch in Petaluma, CA, Fair Trade USA looked to participants for ideas around how to raise awareness of their mission and inspire consumers to purchase Fair Trade products.
Fair Trade USA enables sustainable development and community empowerment by cultivating a more equitable global trade model through certifying and promoting Fair Trade products. Their work benefits everyone from farmers and workers to consumers, industry, and the environment, and yet only 20-30 percent of Americans even know what Fair Trade means. Why? The issues are complex, but as students dug into this problem they identified key factors behind this disconnect, including brands' limited awareness of the business case for Fair Trade, low brand adoption, and a limited Fair Trade product presence in stores.
From those explorations, the following goals emerged:
Motivate and inspire brands to adopt and evangelize Fair Trade practices.
Put more Fair Trade products in front of consumers.
Build “pop culture” awareness of Fair Trade to get more brands to buy into the movement.
To get there, student teams went beyond the initial concept of a website redesign and took on the bigger questions that lead to business transformation. For a look behind the scenes, check out the following video filmed during the Fair Trade USA Boot Camp, then read on for the Fair Trade USA ecosystem model and what the students came up with in the pitch decks that follow.
We are always on the lookout for posts, articles, and other pieces authored by Cooper U alumni. The stories they tell are often an insightful glimpse into what lessons stood out to participants. We were delighted to find this blog post by Meg Davis (Extractable) that calls out so many of the tips and meaningful moments from Design Leadership's curriculum. Take a look...
I recently had the pleasure of attending a two-day event hosted by San Francisco agency Cooper about design leadership. This discussion-based event covered great material about techniques for leadership and communication in the design industry. I would highly recommend this event to other design professionals who want to improve the effectiveness of their work.
Five insights stuck with me, and I’ve included concrete tips about how to live out these insights practically.
Be as intentional with people as you are with your work.
As user experience designers, we love researching people to find out their motivations for using web and digital products. We spend hours on primary research during each project, watching people use products in the context of their work. Yet we don't direct this level of attention toward the co-workers we collaborate with. If we took the time to really understand and build empathy for the people we work with every day, we would understand what kind of pressures they face, what rewards them, what they need to make a decision, and what they need from us in order to trust us. If we can understand each team member's skills and motivations, then we can leverage them to work better together. As the Cooper U team so beautifully put it, "Sometimes you need to slow down to speed up."
Tip: At the start of each project, talk to each team member about his or her intentions for the project and figure out ways to support them, even in small ways.
Tip: Before going into meetings with your peers, understand and anticipate what they will need to feel engaged during the meeting and feel buy-in with respect to the work.
What if, instead of designing explicit interfaces, we aimed at eliminating them altogether? If instead of adding a screen we found ways to remove it? Wouldn't the best user interface be the one that requires nothing of the user?
No UI, proposed here on the Journal by Cooper's Golden Krishna, is interesting, provocative, and deeply flawed. Golden argues that no interface is best, and then explores ways to strip it out. But this begins with a designer's goal rather than the users'. First identify where users are helped or hindered by explicit interfaces: when hindered, eliminate the UI. But there are many times when a UI really helps. When it does, make it great.
But where to start? Three questions can help you evaluate the user’s relationship with a task, product or service.
For any particular interface in the system:
Does the user want or need control?
Does the user get value from doing the work themselves?
Does the user outperform technology?
If you can answer "no" to every one of these questions, then put in the effort to eliminate the interface. If you answer "yes" to any one of them, focus on improving the interface so that it supports the user better. If it's not unanimously "yes" or "no," carefully consider how design can meet the conflicting needs. Get to know your users well. Design a solution that's as sophisticated and nuanced as their situation calls for.
Each of these questions helps you examine the relationship of the user with the technology. These are massively important considerations when advocating for the elimination of the interface; a product without some form of interface effectively doesn't exist for the user. The UI is the embodiment of the user's relationship with the product. No interface, no relationship. Sometimes this is exactly what you want. But people also value products because they bring something into their lives, or because they remove some obstacle from them. Every tool, game, or service gives people power, information, peace, pleasure, or possibility. Interactions with these should be awesome, helpful, supportive, effortless; and for this we often need a really great UI.
On a recent flight from Amsterdam to Houston, I turned on the "moving-map system" in the in-seat entertainment system and was surprised to see that though we were halfway through the 10-hour flight, the map made it look like we were minutes from landing in Texas. Sure, I had dozed the delightful doze of the jet-lagged international traveler, but had I actually passed out? Or had time slipped by that quickly? Shouldn't we be somewhere over Greenland? That looks about halfway.
Then I realized that it was the map itself that was to blame. The arc was being true to the plane’s path across the map, but with a map as distorted as this one, it was bound to be confusing.
Like most cool things, this gets nerdy quick. See, when you try to take something that's pretty much a sphere (the Earth) and fit it to a rectangle (the in-flight entertainment screen), you're going to run into some deformation. There are many, many ways to crack this mathematical nut (the awesome site Radical Cartography lists 30), and each optimizes some things at the cost of others.
The designer of this system had chosen to use the familiar Plate Carrée projection of the Earth. It’s ancient, and quite familiar to travelers. It’s used everywhere. So certainly, it optimizes for initial use. At a simple glance, the traveler knows what he’s looking at.
But this projection, in forcing the longitude and latitude lines into tidy squares, severely stretches areas that are closer to the poles. The result is that—unless you're traveling from one point on the equator to another—an actual straight line across the surface of the planet will appear on this map as oddly arced. What's worse is that the arc won't mean the same thing across its length. Nearer the poles it will be stretched and closer to the equator it will be squished, resulting in the weird, jarring experience of watching the plane zip to Ontario, and then crawl to the Gulf Coast.
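If you want to see the math for yourself, here's a minimal sketch in plain Python (the coordinates are approximate, and no mapping library is assumed). It samples the Amsterdam-Houston great circle at equal distance intervals and prints the Plate Carrée coordinates; the uneven longitude gaps between samples are exactly that zip-then-crawl effect.

```python
import math

def to_xyz(lat, lon):
    # Convert degrees of lat/lon to a unit vector on the sphere.
    la, lo = math.radians(lat), math.radians(lon)
    return (math.cos(la) * math.cos(lo),
            math.cos(la) * math.sin(lo),
            math.sin(la))

def slerp(p, q, t):
    # Spherical interpolation: the point a fraction t of the way
    # along the great circle from p to q.
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    omega = math.acos(dot)
    s = math.sin(omega)
    return tuple((math.sin((1 - t) * omega) * a + math.sin(t * omega) * b) / s
                 for a, b in zip(p, q))

def to_latlon(v):
    x, y, z = v
    return math.degrees(math.asin(z)), math.degrees(math.atan2(y, x))

# Roughly Amsterdam and Houston.
ams, hou = to_xyz(52.3, 4.8), to_xyz(29.8, -95.4)

# Equal-distance samples along the actual flight path. On a Plate
# Carree map (x = longitude, y = latitude), the longitude gaps balloon
# near the top of the arc and shrink toward Texas: the plane icon
# zips, then crawls, at constant real speed.
for i in range(11):
    lat, lon = to_latlon(slerp(ams, hou, i / 10))
    print(f"t={i / 10:.1f}  lat {lat:6.1f}  lon {lon:7.1f}")
```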
Whew. That was close. Like every year, there's a risk that we'll be overrun with zombies, werewolves, vampires, sasquatch(es), and mummies before the veil that separates the worlds seals tight for another year. But a quick tally around the Cooper offices shows that here, at least, we all made it. Hope all our readers are yet un-undead as well. While we're taking this breather, we're called to reflect a bit on this year's interaction design for monsters.
Monsters are extreme personas
One of the powers of personas is that they encourage designers to be more extrospective, to stop designing for themselves. Monsters as personas push this to an extreme. It's rare that you'll ever be designing technology for humans who can't perceive anything, can't speak any modern language, live nearly eternally, shape-shift, etc. But each of these outrageous constraints challenges designers to create a design that could accommodate it, and often ends up driving what's new or special about the design.
But then again...
Some of the constraints of the monsters are human constraints writ large (or writ strangely).
Juan wasn’t a useful person in and of himself, but his users exercised flash mob requirements of real-time activation and coordination. Are there flash mob lessons to learn?
Emily was fighting a zombie infection, but real-world humans are fighting infections all the time. Is there something we can use for medical interfaces?
Metanipsah has no modern language and a mechanical mental model, but most of us have mobile wayfinding needs at one time or another.
The Vampire Capitalists behind Genotone took the long view, reminding us of burgeoning post-growth business models.
So maybe they’re great personas after all, guiding us to great design because they’re extreme, just like the canonical OXO Good Grips story, where designing for people with arthritis led the design teams to create products with universal appeal.
It was a full house of design thinkers with a Silicon Valley twist. Serial entrepreneurs. Voice-activation specialists. Tech wunderkinds. An evening of passionate discussion about the future of interfaces.
“I felt like I was back in college — the good parts of college,” Strava designer Peter Duyan told me afterwards.
Peter was crammed in this room of college-like discourse — designed for 35, now seating over 60 — because of a blog post I wrote that went unexpectedly viral.
I had proposed that "the best interface is no interface." That we should focus on experiences and problems, not on screens. That UX is not UI. Two days after it was published, it was shared more on Twitter than anything ever written on The Cooper Journal, Core77 or Design Observer. A week later, a Breaking Development podcast. Two weeks, a popular Branch discussion. A month, top ten on Hacker News again. All surprising, flattering, amazing. And that evening, a conversation.
In the spirit of discourse, special guest and design legend Don Norman started the evening with an entertaining retort: “They made a big mistake when they invited me.” (Watch it above, or listen to it here. And if you haven’t read his books, you should).
Then, in 1984, Apple adopted Xerox PARC's WIMP — window, icon, menu, pointer — and took us a galactic leap away from those horrifying command lines of DOS and into a world of graphical user interfaces.
We were converted. And a decade later, when we could touch the Palm Pilot instead of dragging a mouse, we were even more impressed. But today, our love for the digital interface has gotten out of control.
It’s become the answer to every design problem.
How do you make a better car? Slap an interface in it.
A giant touchscreen with news and weather is exactly what’s missing from my hotel stay. (Source: IDEO)
Creative minds in technology should focus on solving problems. Not just making interfaces.
As Donald Norman said in 1990, “The real problem with the interface is that it is an interface. Interfaces get in the way. I don’t want to focus my energies on an interface. I want to focus on the job…I don’t want to think of myself as using a computer, I want to think of myself as doing my job.”
It's time for us to move beyond screen-based thinking. Because when we think in screens, we design based upon a model that is inherently unnatural, inhumane, and has diminishing returns. It requires a great deal of talent, money and time to make these systems somewhat usable, and after all that effort, the software can, sadly, only truly improve with a major overhaul.
There is a better path: No UI. A design methodology that aims to produce a radically simple technological future without digital interfaces. Following three simple principles, we can design smarter, more useful systems that make our lives better.
Principle 1: Eliminate interfaces to embrace natural processes.
Several car companies have recently created smartphone apps that allow drivers to unlock their car doors. Generally, the unlocking feature plays out like this:
A driver approaches her car.
Takes her smartphone out of her purse.
Turns her phone on.
Slides to unlock her phone.
Enters her passcode into her phone.
Swipes through a sea of icons, trying to find the app.
Taps the desired app icon.
Waits for the app to load.
Looks at the app, and tries to figure out (or remember) how it works.
Makes a best guess about which menu item to hit to unlock doors and taps that item.
Taps a button to unlock the doors.
The car doors unlock.
She opens her car door.
Thirteen steps later, she can enter her car.
The app forces the driver to use her phone. She has to learn a new interface. And the experience is designed around the flow of the computer, not the flow of a person.
If we eliminate the UI, we're left with only three natural steps:
A driver approaches her car.
The car doors unlock.
She opens her car door.
Anything beyond these three steps should be frowned upon.
Seem crazy? Well, this was solved by Mercedes-Benz in 1999. Please watch the first 22 seconds of this incredibly smart (but rather unsexy) demonstration:
By reframing the design constraints from the resolution of the iPhone to our natural course of actions, Mercedes created an incredibly intuitive and wonderfully elegant car-entry experience. The car senses that the key is nearby, and the door opens without any extra work.
That’s good design thinking. After all, especially when designing around common tasks, the best interface is no interface.
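For the curious, the interaction logic is almost embarrassingly small. Here's a toy sketch — emphatically not Mercedes' actual protocol; the fob ID, sensing range, and simulated radio are all invented for illustration:

```python
import time

AUTHORIZED_FOBS = {"fob-0042"}  # hypothetical paired key fob
UNLOCK_RANGE_M = 1.5            # assumed sensing radius, in meters

def sense_nearby_fobs():
    # Stand-in for the car's short-range radio. A real system would do a
    # cryptographic challenge-response with the fob; here we just simulate
    # the driver walking closer on each poll.
    sense_nearby_fobs.distance -= 0.7
    return [("fob-0042", max(sense_nearby_fobs.distance, 0.0))]
sense_nearby_fobs.distance = 6.0

def keyless_entry_loop():
    # Unlock as soon as a paired fob is in range. The driver's only
    # steps: approach the car, open the door.
    while True:
        for fob_id, distance in sense_nearby_fobs():
            if fob_id in AUTHORIZED_FOBS and distance <= UNLOCK_RANGE_M:
                print("doors unlocked")
                return
        time.sleep(0.2)

keyless_entry_loop()
```

All the work has been pushed below the driver's awareness; the "interface" is the act of walking up to the car.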
A few companies, including Google, have built smartphone apps that allow customers to pay merchants using NFC. Here’s the flow:
A shopper enters a store.
Orders a sandwich.
Takes his smartphone out of his pocket.
Turns his phone on.
Slides to unlock.
Enters his passcode into the phone.
Swipes through a sea of icons, trying to find the Google Wallet app.
Taps the desired app icon.
Waits for the app to load.
Looks at the app, and tries to figure out (or remember) how it works.
Makes a best guess about which menu item to hit to reveal his credit cards linked to Google Wallet. In this case, "payment types."
Swipes to find the credit card he would like to use.
Taps that desired credit card.
Finds the NFC receiver near the cash register.
Taps his smartphone to the NFC receiver to pay.
Sits down and eats his sandwich.
If we eliminate the UI, we're again left with only three natural steps:
A shopper enters a store.
Orders a sandwich.
Sits down and eats his sandwich.
Asking a person behind a register for an item is a natural interaction. And that's all it takes to pay with Auto Tab in Pay with Square. Start at 2:08:
Auto Tab in Pay with Square does require some UI to get started. But by using location awareness behind-the-scenes, the customer doesn’t have to deal with UI, and can simply pursue his natural course of actions.
As Jack Dorsey of Square explains above, “NFC is another thing you have to do. It’s another action you have to take. And it’s not the most human action to wave a device around another device and wait for a beep. It just doesn’t feel right.”
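To make "location awareness behind the scenes" concrete, here's a minimal geofence sketch. It is not Square's actual implementation — the radius, field names, and coordinates are assumptions for illustration:

```python
import math

TAB_RADIUS_M = 100  # assumed geofence radius around the merchant

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two lat/lon points.
    r = 6371000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def maybe_open_tab(customer, merchant):
    # Open a tab automatically when an opted-in customer is near the
    # shop, so paying is reduced to saying your name at the counter.
    d = haversine_m(customer["lat"], customer["lon"],
                    merchant["lat"], merchant["lon"])
    if customer["auto_tab"] and d <= TAB_RADIUS_M:
        print(f"Tab opened for {customer['name']} at {merchant['name']}")

maybe_open_tab(
    {"name": "Sam", "lat": 37.7764, "lon": -122.4195, "auto_tab": True},
    {"name": "Deli on 5th", "lat": 37.7767, "lon": -122.4190},
)
```

The interface work happens once, at signup; after that, the natural steps are the whole experience.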
Principle 2: Leverage computers instead of catering to them.
No UI is about machines helping us, instead of us adapting for computers.
With UI, we are faced with counterintuitive interaction methods that are tailored to the needs of a computer. We are forced to navigate complex databases to obtain simple information. We are required to memorize countless passwords with rules like one capital letter, two numbers and a punctuation mark. And most importantly, we’re constantly pulled away from the stuff we actually want to be doing.
A Windows 2000 password requirement. (Source: Microsoft)
By embracing No UI, the design focuses on your needs. There's no interface for the sake of interface. Instead, computers cater to you.
Your car door unlocks when you walk up to it. Your TV turns on to the channel you want to watch. Your alarm clock sets itself, and even wakes you up at the right REM moment.
Even your car lets you know when something is wrong:
When we let go of screen-based thinking, we design purely to the needs of a person. After all, good experience design isn't about good screens, it's about good experiences.
Principle 3: Create a system that adapts for people.
I know, you’re great.
You’re a unique, amazingly complex individual, filled with your own interests and desires.
So building a great UI for you is hard. It takes open-minded leaders, great research, deep insights...let’s put it this way: it’s challenging.
So why are companies spending millions of dollars simply to make inherently unnatural interfaces feel somewhat natural for you? And even more puzzling, why do they continue to do so, when UI often has a diminishing rate of return?
Think back to when you first signed up for Gmail. Once you discovered innovative features like conversation view, you were hugely rewarded. But over time, the rate of return has diminished. The interface has become stale.
Sadly, the obvious way for Google to give you another leap forward is to have its designers and engineers spend an incredible amount of time and effort on a redesign. And when they do, you will be faced with the pain of learning how to interact with the new interface; some things will work better for you, and some things will be worse.
Alternatively, No UI systems focus on you. These systems aren’t bound by the constraints of screens, but instead are able to organically and rapidly grow to fit your needs.
After you sign up for Trunk Club, you have an introductory conversation with a stylist. Then, they send your first trunk of clothes. What you like, you keep. What you don’t like, you send back. Based on your returns and what you keep, Trunk Club learns more and more about you, giving you better and better results each time.
Diminishing rate of return over time? Nay, increasing returns.
Without a bulky UI, it’s easier to become more and more relevant. For fashion, the best interface is no interface.
Another company focused on adapting to your needs is Nest.
When I first saw Nest, I thought they had just slapped an interface on a thermostat and called it "innovation."
As time passes, the need to use Nest’s UI diminishes. (Source: YouTube)
But there’s something special about the Nest thermostat: it doesn’t want to have a UI.
Nest studies you. It tracks when you wake up. What temperatures you prefer over the course of the day. Nest works hard to eliminate the need for its own UI by learning about you.
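A deliberately naive sketch of that learning loop might look like the following. Nest's real algorithm is proprietary and far more sophisticated (occupancy sensing, weekday versus weekend patterns, and so on); this just shows the shape of the idea:

```python
from collections import defaultdict

class LearningThermostat:
    # Record the user's manual adjustments, then replay the average
    # setpoint for each hour so the dial rarely needs touching.

    def __init__(self, default_temp=20.0):
        self.default = default_temp
        self.history = defaultdict(list)  # hour -> temps the user chose

    def record_adjustment(self, hour, temp):
        self.history[hour].append(temp)

    def setpoint(self, hour):
        temps = self.history[hour]
        return sum(temps) / len(temps) if temps else self.default

nest = LearningThermostat()
for temp in (21.5, 22.0, 21.5):      # three mornings of manual tweaks
    nest.record_adjustment(7, temp)
print(nest.setpoint(7))              # ~21.7 tomorrow at 7am, no UI needed
```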
Haven’t I heard this before?
The foundation for No UI has been laid by countless other members of the design community.
In 1988, Mark Weiser of Xerox PARC coined “ubiquitous computing.” In 1995, this was part of his abstract on Calm Technology:
“The impact of technology will increase ten-fold as it is imbedded in the fabric of everyday life. As technology becomes more imbedded and invisible, it calms our lives by removing annoyances while keeping us connected with what is truly important.”
In 1998, Don Norman published The Invisible Computer. From the book's description: "...Norman shows why the computer is so difficult to use and why this complexity is fundamental to its nature. The only answer, says Norman, is to start over again, to develop information appliances that fit people's needs and lives."
In 1999, Kevin Ashton gave a talk about “The Internet of Things.” His words:
“If we had computers that knew everything there was to know about things—using data they gathered without any help from us—we would be able to track and count everything, and greatly reduce waste, loss and cost.”
Today, we finally have the technology to achieve a lot of these goals.
This past year, Amber Case talked about Weiser-inspired location awareness.
There’s a lot we can achieve with some of our basic tools today.
Let’s keep talking.
Oh, there’s so much more to say:
Watch the Cooper Parlor. After this essay exploded on Twitter, Cooper hosted a No UI event with special guest, design legend Donald Norman.
It is rare indeed that designers eagerly anticipate a release from Microsoft. This October's Windows 8 release will see a new Windows Phone, the second version of the Metro UI for mobile devices. But more significantly, Windows 8 will bring the Metro interface to the desktop.
Metro on mobile and on desktop.
Metro, which won over designers, developers, and users with its colorful, transit-inspired, and minimally geometric interface, was first bundled with the Windows Phone 7 package. It was a risky - but undeniably insightful - move. Rather than simply playing catch-up to Android and iOS, the gridded interface staked a dramatic new claim on how an OS should function on a mobile device. Rather than presenting a "home screen" where a user launches applications - an idea borrowed directly from the desktop - Metro uses the blocky launch icons to directly display the latest information and updates from within the apps themselves.
In other words, rather than launching your news app to check for the latest headline, Metro would feature those headlines right on the home screen. You’ll click on an app once you already know something of interest lies beneath. But Metro’s most striking implication is that you might not even open those apps as often anymore.
However, Microsoft’s approach to the home screen was not the first attempt at a radical departure from established mobile home screen norms. In 2010, an Android app called SlideScreen was on a similar mission, and its untimely demise shows the complications of innovating on the home screen in an environment where the handset makers and the creators of operating systems make the rules.
The SlideScreen app on Android.
SlideScreen, developed by Larva Labs, cleverly replaced the Android home screen with snippets of the content you depend on most. Get the gist of your inbox, absorb the latest headlines in your feeds, and check in on the churn of tweets and Facebook updates every time you idly glance at your phone. It was space-efficient without looking cramped - austere, but with personality.
Many early Android users (this author included) grew dependent on the immediacy: there was no need to navigate to an app or pull down a pane. The phone stopped being another media channel and became a tool again.
But in August of 2011 it was over. An ill-timed security update prevented the app from reading data from Gmail. SlideScreen could no longer “hot-wire” you straight to your messages. Developer Matt Hall begrudgingly admitted: “As of right now there appears to be no workaround as this is an intentional change to restrict access to the data. [..] As of this morning we’ve removed the app from the market.” SlideScreen was dead.
It's a shame. SlideScreen was an important counterpoint to the prevailing norm on phone operating systems: the home screen as a list of apps you can launch. It's a limiting norm that makes phones less useful. The app-launcher approach to home screens essentially traps information and functionality in digital "lockboxes" that can't be accessed without starting an app.
SlideScreen's story highlights how apps themselves can't innovate without an alignment of vision with the creators of operating systems, consumer services, and information providers. Apps also depend on digital lockboxes that are stable and supply open data. But these conditions weren't present in 2011, and they are even less so today. And when software ecosystems become more closed, apps like SlideScreen can't flourish. That is likely why the Apple iOS home screen paradigm has remained largely unchallenged.
The launch of Apple's first iPhone in 2007 popularized this paradigm of precious, "gemstone" app icons. Instrumental in the phone's success, the icons simplified access to functionality and made it obvious to novice users what a smartphone could actually do. But simplicity comes at the cost of information density and efficiency. Apart from the occasional push notification, there are precious few hints at what relevant information might be behind each icon.
Yet despite these shortcomings, and in spite of the efforts of Larva Labs and the Windows Phone team, there's a real possibility that the gemstone paradigm becomes this decade's default mobile navigation system. Why is this worrisome? Interface paradigms tend to die slow deaths.
On desktop computers and laptops, the same antiquated metaphor has guided interface development since the early 1970s. The "desktop metaphor", as it is called, treats the computer screen as an imaginary desk where objects like "files" and "folders" can be put. Despite some valiant efforts (at Cooper we took our stab with the Litl netbook, and Google attempted to bring the beast down with their Chrome OS), this concept has displayed a frightening resistance to technological progress and user needs.
The same thing can happen on our phones. We are facing the risk that inarticulate gemstones could become the primary way you operate your phone, even when new technology begins allowing for far superior ways to interact with smartphones. A smartphone’s ability to predict and automate actions has massively improved alongside the evolution of its impressive stack of sensors, cameras, microphones, and touch screens. Based on this knowledge, there are many ways a phone can tailor a home screen to the needs of the situation or time of day. The phone can begin to guess what I might need to know. Wouldn’t that be nice - a home screen with information I care about, rather than a list of the apps I have downloaded?
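As a sketch of what that could look like (the rules, cards, and location labels here are all hypothetical - a real phone would learn them rather than hard-code them):

```python
import datetime

def home_screen_cards(now, location):
    # Surface information instead of app icons, chosen by time and place.
    cards = []
    if location == "transit_stop":
        cards.append("Next departures and service alerts")
    if 6 <= now.hour < 9:
        cards.append("Commute time and weather")
    elif 17 <= now.hour < 20:
        cards.append("Messages from family and dinner options")
    cards.append("Upcoming calendar events")
    return cards

print(home_screen_cards(datetime.datetime(2012, 9, 3, 8, 15), "transit_stop"))
# ['Next departures and service alerts', 'Commute time and weather',
#  'Upcoming calendar events']
```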
As Metro seeks to demonstrate, the main purpose of smartphones should not be to launch apps. Smartphones have a lot of impressive functionality, but not all functions are equally important. Not all functions need an icon. A home screen should facilitate important functions and hide trivial ones. It should make it easy to communicate, keep me aware of time and place, and anticipate common information needs. The standard home screen as we know it today is not up to the task, so let us look for better ways. Let us leave the familiar behind. A better home screen is out there.
Telling visionary stories takes more than great tech: it takes imagination, warmth, and a devotion to showing a world made better by your inventions.
News of Google's Project Glass lit up web chatter in the design and tech community. On the one hand, it was a provocative leap forward, with Google stepping boldly toward category-defining hardware; on the other, it showed a vision of the future that is largely uninspiring.
We'll need to work all this out, but let's talk about Google's vision for this amazing tech. Watch the vision video and you see interactions that will all be familiar: Siri-like natural language recognition and commands, location and time notifications, weather forecasting, real-time text and video chat, GPS mapping and location sharing, checking in, sharing photos to social networks, etc. There is a subtle shift in stance, from a more sovereign interaction to one that is more transient. With phones we have a more explicit, intentional interaction; Glass is more of a dip-in-and-out of the digital experience. Instead of pulling out your phone to read your Twitter feed for the whole 20-minute commute home, Glass has been envisioned as more of a light technology augmentation to the real world.
But there's little that's emotionally resonant. It feels like a demonstration of how you'd do all the stuff you do on your iPhone today in your Glass tomorrow. The focus is on performing tasks that highlight features. It comes across like a technology searching for an application.
I don't mean to be down on the tech. When I first saw it, I was really excited about the possibilities. This is groundbreaking technology: it makes the screen fully portable and hands-free, liberating you from the effort required to interact with a phone and enhancing your interactions with the world around you. Google's got their engineers making really cool stuff, but when it comes to imagination or emotional resonance - telling a story that makes you connect with and desire what they are making - it's just not there.
Let's look at ways the storytelling could have more effectively invited us to imagine a future that's better with Glass.
Helpful insights beyond the moment
Pushing the local forecast into your eye every time you look out the window seems annoying and obnoxious. Technology is pushing its way into your experience.
Apple shows the same need for insight into weather, but it's prompted by the user, who asks about the weather in a city she's clearly packing to visit. The value of the information is greater to the traveler, who can't just look out the window and get a pretty good idea about the local weather. By giving her a forecast for New York, the phone is more helpful; it's giving her information at a moment where she can make the most use of it. We connect with the experience because we know how difficult it can be to arrive at a destination having packed the wrong clothes. It's worth noting that Apple doesn't even show you the results; they don't have to, because you fill in the details yourself.
The warmth of connection
Next up, Google shows your friend reaching out to see if you want to hang out. Sweet, right? But no - it feels like you're forced to translate everything into text instead of simply using your voice to communicate. In the Glass interface, chat is a silent activity with beeps and bleeps for feedback. What if instead you could simply chat, you know, with your voice? It could still be asynchronous; it doesn't need to be a phone call. Your voice can also be parsed into text, but offering both allows for a richer, deeper connection. You get the warmth and excitement in your friend's voice, not a text message you have to read.
Thinking a little bit ahead
As you head into the subway, Google lets you know the subway's not running. Drat. You've already hoofed it here; now your only option is to walk.
How much more helpful would it be if Glass knew that you usually catch the 6 and told you that service was suspended before you left home? That would give you a chance to grab your bike instead.
Setting your hands free
Speaking of bikes, how did bike riding NOT make it into the video?
Walking is slow enough that you can stop and pause to check directions. Biking is fast, and doing it safely requires both hands. Glass frees your hands up.
Getting a little heads-up display action letting you know your speed and your distance covered would be a great augmentation to the ride.
Helping you remember the important things
Reminders are helpful, but hardly the stuff of great narrative. Google shows setting a reminder to buy tickets to a show. Meh. I mean, sure, it's something you'd want to remember, but as a story there's little to connect with.
Apple's not all that much better: the girl who's running asks Siri to remind her to call Chris when she gets home. Her speed clearly makes it harder for her to type a reminder, and with Siri she can save one without breaking her stride, but it's so generic we don't really connect. Why? Because saving a reminder with your voice is technologically difficult, and doing it well has taken some serious engineering - but really, it's about as exciting as watching someone write a list. The magical experience of reminders is when they help you remember the thing you'd have otherwise forgotten, so why not pick something that would be a real shame to miss?
Walking up to your front door and getting a reminder to call dad and wish him a happy birthday? Now that's something we can all connect with and see the value of.
Making location awareness magic
Next up, Google takes us to the Strand bookstore. Glass makes sure you know you've arrived by pushing the location to your eye.
It's this kind of demonstration that seems like a gratuitous use of technology. Isn't the big red signage enough of a confirmation that you've arrived?
Duplicating the busy information density we experience in an urban environment isn't an experience you'd really want to sign up for. If you wanted to go seriously visionary, why not propose a not-too-distant future where all the signs and advertisements screaming for your attention have been removed? The beauty and dignity of the architecture is preserved as the signage moves into our smart devices like Glass. Then pushing the bookstore name to me becomes helpful.
So next there's an opportunity to interact with this guy.
But you don't take it. You ask your Glass eyepiece instead, and it gives you directions for walking 20 feet. There's a rosy picture of the future: no more interactions with strangers, no basic self-sufficiency.
There are a few ways Google could have taken this to make it more compelling.
Make the store unbelievably busy, so that it would be a long wait until you could ask someone for the location of the music section; at least then you're not being antisocial, you're just resourceful. But still, bookstores have some of the most dependable signage; finding the right section isn't really all that hard.
What if, instead, you could walk into the supermarket and, as you walk the aisles, Glass uses your location and shopping list to simply pop up items for you to grab from the shelf? Now you're doing something you can't really do today. It's helpful, and kind of cool. There's no way you could ask the cashier to show you where all the items on your shopping list are located. With Glass shopping-list assistance you're able to walk into a store you've never visited, grab everything on your shopping list - which happens to have been updated by your partner just a moment ago - and leave, sure that you've got everything you need.
Continuing in the bookstore, Google shows you checking to see if your friend has arrived yet. No, you don't just walk out to the street or wait for him to come grab you; you use Glass to seek his location, and it tells you he's half a block away.
Creepiness aside, it's not saving you from a lot of work or discomfort. You could have just stepped out to wait for your pal on the street. Just because you can do it doesn't mean it's inspiring or visionary. Location awareness of other people is a hard thing to do right. Even among friends there are lots of privacy issues, and anyone who's seriously tried to make apps that leverage the power of tracking has ended up with low adoption or swift negative reactions.
Moving on, you follow your buddy to a nearby coffee truck. Your first instinct is to check in. Seriously? OK, maybe Google needed to show it to compete with Foursquare, but come on, this isn't particularly engaging for us viewers.
Also, one day in the future, you'll still need to manually check in? If checking in is your thing, can't it just happen automatically in the background, or at least be less of a process? In the Glass vision, it's the same amount of work as doing it with your phone.
So you get a cup of joe and then part ways with Paul.
Sure, there's a cut or two to edit out the stuff that's not showing off Glass, but this makes the story mostly about you using your friend Paul to find a good cup of coffee, which is something you could have just messaged him about. The heavy focus on showing off the technology has robbed the story of its humanity. First you didn't chat with the bookstore clerk; then you only meet up with a pal to get insider info. You seem like a cold jerk.
Apple takes a totally different approach to telling the story with FaceTime. The phone is used to bridge the gap, to overcome the barriers of physical space.
The phone frames the entire interaction but instead of getting in the way it falls to the background. You quickly find yourself transported into a deeply intimate moment, the story connects instantly, and you empathize with the people and appreciate how the technology makes this kind of emotional connection possible. The people here clearly care for one another and value spending this moment together.
After parting ways, you come across a cool piece of street art. You want to share it, and in a second you can capture the image and upload it.
The process is simple - simpler and easier than pulling out a camera. It's effortless, and it really shows a sweet way to capture images. Here again the magic of the tech is clear, but the story fails. Street art is hip, but people are what matter; humans are innately drawn to faces.
What about grabbing a few photos of your buddy while hanging out?
And how cool would it be if it auto recognized your friend and added the pictures to his image stream too?
The last scene is easily the most enchanting. It feels a bit contrived, but we're willing to overlook it because it actually shows how this technology might bring us closer together.
It's one-way video chat at first, and because there's no camera looking at your face, your friend can't see you. What's the next best thing? Sharing your view.
It's a delightful ending for an otherwise uninspiring story. But it didn't need to be that way. When we tell stories, especially about a future shaped by new technology, it's important to keep the focus on people. Our gadgets and tools aren't the point; they are means, not ends. Every twist and turn in the story should help us see a world that is made better, not just different. When you present a vision, strive to deliver a story with deep emotional resonance. We don't need the technology to be perfect, or the applications mind-blowing, but we do need to see through it to the deeper, more essential need: our desire to be connected, to have meaning, and to share life with one another.
Delivering enchanting experiences
A final thought. An Apple commercial for Siri shows a girl on a road trip gazing up into the night sky and wondering what the Orion constellation looks like.
Siri delivers a nifty image.
Google, Glass was made to best this! Looking into a bright, glowing phone to try to match it against the night sky would be a terrible experience.
With Glass you simply look skyward. Glass can magically connect the stars for you.
Now that's the way to learn about the night sky.
It has never been easy to demonstrate the value of interaction design, but the ubiquity of video as a communication tool has helped a lot. Video is a great way to reach online audiences: it is easily accessed on YouTube and Vimeo, and it is expected to be short and to the point. With little investment, design firms can capture high-quality video with any number of relatively low-cost cameras, and use powerful editing tools to tell our stories. When a video is done well, it helps humanize the design and gives a peek into the methods behind it.
At Cooper, we have been experimenting with video, and we pay attention to what others are doing in that space. As we share more about our process, we are also changing our clients' and the general public's expectations of disclosure. While it's a great idea in theory, in practice finding the right formula can be tricky. For example, video is a more spectacular and emotionally effective medium than static blog posts, but subtle mistakes in tone and presentation run the risk of coming off as pretentious, overproduced, off-topic or just downright goofy. Here are three big things we've noted about how to get it right.