Cooper + Studio Dental: Shining a Spotlight On Service Design

How service design helped this startup learn to tackle their business step-by-step.

As part of our continuing mentorship program at Rock Health, Cooper teamed up with Studio Dental co-founders Dr. Sara Creighton and Lowell Caulder to help them disrupt the dental industry with their mobile dental service. The startup gained early support from a successful $40K Indiegogo campaign, and for Cooper, this project has been a great opportunity to demonstrate the value of service design.

If I were to put a finger on the biggest ah ha moment, it was probably, “Oh, services are designed!”

- Lowell Caulder, co-founder, Studio Dental

In this conversation, the co-founders share how and why Studio Dental was born, and they reveal an “ah ha” moment or two, including the discovery that the impact of service design is everywhere, and central to any industry’s success.

Dr Sara Creighton and Lowell Caulder, founders of Studio Dental

Read More

Cooper, Augmedix and Google Glass: No Real Estate? No Problem

Interaction designers today are really good at designing screens. Designing for Google Glass took us out of that comfort zone, and in some ways back to the basics. It reminded us of that truism that the raw building blocks of user experience are not screens—they are experiences.

Google Glass is in many ways not ready for prime time, but makes perfect sense for certain specialized applications, like what Augmedix has envisioned for doctors, who need to capture and reference key information while keeping their full attention on patients. Hands-free operation is one of the key strengths of today’s iteration of Glass. Medicine is particularly rich with hands-free mission critical use cases, and Augmedix is taking the first step down that path. Others are imagining similar applications for Glass, such as for first responders in emergency situations.

Read More

Augmented Experience

Photo via Reuters / Carlo Allegri

Let’s be honest: Google Glass looks pretty silly. Its appearance is out of time, futuristic, and obnoxiously so. And it’s out of place in daily life—a strange accessory with mysterious purpose, as if someone were to walk around all day with a skateboard on a leash.

But Glass also points to an intriguing future, one in which the line between using a digital device and simply going about daily life is removed. Whereas traditional spectacles have a corrective purpose to see reality more clearly, Glass offers a new category of lenses that promise to augment the reality we see. It opens a vast new frontier for the practice of interaction design that, like the Wild West, is full of lawlessness and danger and promise. And it is the UX community that will shape this landscape; we will determine its character, and the impact it will have on people’s lives.

A key question all this raises is: what “reality” is Glass augmenting? At the moment, being a Google product, the augmentation is designed to primarily target the urban economic and social spheres. Looking down the street through Glass, you may see restaurant store-fronts adorned with floating metadata describing the cuisine type and star-ratings by previous diners. Turning your head, an indicator points in the direction of the location of your next calendar appointment. Peering at a product on the shelf, prices for similar products are displayed for easy comparison. You’ll always know where you are, where you need to be, and what you’re looking at. The reality that Glass augments is a realm of people, objects, places of business, and locations. In other words, what can be expressed in a database and efficiently searched.

Toward a better future

At this point in the conversation, the story usually veers into the realm of exasperation and despair. Google Glass represents the death of spontaneity! It will systematize and computerize our lives! Organic experience will be lost! (And, most insidious of all) Google will monitor and monetize every saccade of our eyeball, every step we take!

“Big brother” from the film adaptation of Orwell’s 1984

Given the penchant for technologists to base business models on advertising and “big data” about their customers, it is not surprising that Google Glass can be seen as a kind of portable panopticon. But I think the truth that this device foreshadows is something potentially more benign, and almost certainly beneficial.

The dystopian narrative that depicts a society dominated by machines and ubiquitous surveillance is common, expressed through fiction, film, and even journalism, which tends to draw on the same sinister rhetoric. George Orwell’s 1984 describes the homogenization and suppression of culture through rules, systems, and constant surveillance. In a more recent popular expression, Pixar’s Wall-E imagines a future humanity composed of zombie-like innocents, shuttled along by automated chairs, staring feebly into digital screens, mobilized—and controlled—by machines. The plausibility of these futures is made even more vivid by the unfolding story of the depth of NSA surveillance.

To paraphrase a recent piece by Don Norman, it all depends on how we design and develop augmented reality applications. If we manage to create useful and utility-producing applications with wearable technologies like Google Glass, people will benefit. This seems at first more like a truism than truth. But the obviousness of the statement belies the underlying premise, which is that Google Glass and its future iterations are simply a canvas on which we can write the future of our “augmented” everyday experience. So let’s not leave it all up to Google, shall we?

Big ideas

Ideas for the positive future of augmented reality abound. Augmedix, for example, is a small company with a vision of Google Glass re-shaping the doctor-patient relationship. Increasingly, the burden of the new and fraught world of digital medical records is damaging this interaction. Doctors stare at screens instead of faces; they spend as much time clicking checkboxes and radio buttons as they do examining the bodies and listening to the voices of the people under their care. Augmented reality could turn this scenario on its head by allowing doctors to look at and converse with their patients while simultaneously accessing and transmitting important information through Glass. This will almost certainly lead to fewer errors, an increase in trust, and ultimately better health outcomes.

A doctor wears Glass with the Augmedix app.

Or consider William Gibson’s Spook Country, a novel in which a central character creates “locative art,” what you might call augmented reality sculpture. Imagine looking at a city fountain with your augmentation goggles and seeing a bloom of light and color where others see only water. That we could transform our physical landscape in a way that enhances its beauty—rather than simply enhancing its economic potential—is a stunning notion. Unlike 3D movie glasses or straight-up “virtual reality,” the idea of a physical/virtual mashup offers us a chance to experiment and play in realms previously only available to the world of screens and displays, without losing the sense of being present in a place, a loss virtual reality cannot avoid. We remain in the real world.

The design of augmented reality

The first attempts to harness the power of Glass-like technology will be “ports,” shoe-horning old functionality into a new form factor. Text and email messages will appear, caller ID will notify you of a phone call, the front-facing camera will take a picture or video on command. But none of these use cases address new goals. They simply make achieving old goals incrementally faster or more convenient. I don’t have to lift my phone and look at the screen to see a text message or know who’s calling. I don’t have to lift my camera and press a button to take a picture. The difference in my experience enabled by porting functionality from my phone to Glass is a difference of degree, not a difference in kind.

More interesting will be the forays into using augmented reality tech to solve previously unmet goals. Augmedix is a good example, because it bucks a trend toward less personal medicine and solves both a doctor and a patient goal. Locative art is similarly interesting, because it provides an entirely new artistic medium and way of experiencing that art. Mapping and orientation in a visually augmented world represents another fundamental change, because it bridges the gap between the abstract 2D map and the immediately actionable—a translation that currently happens in the human brain.

Go get ‘em

Augmented reality is in its infancy. Google Glass still faces some serious challenges, especially on the hardware front—miniaturizing the device and making it less obtrusive is necessary to make it less like pulling a skateboard on a leash everywhere you go. But the frontier for experience design this device opens up is huge, and doesn’t have to remain within the boundaries Google sets. Part of our challenge and calling as a UX community is to think deeply about what an augmented experience feels like, and how it shapes people’s lives. As you would with any user experience, let unmet user goals guide your design.

Your role in this revolution is just beginning.

Raising Funds and Raising the Bar: Hats Off to Practice Fusion

When Practice Fusion recently announced its spectacular $70M financing round, cheers went up not only throughout the healthcare sector, where the company is one of the fastest growing health tech pioneers, but also within the halls of Cooper, where the design and prototype for Practice Fusion’s 2013 IxDA award-winning iPad app was born.

Stefan Klocek, former Cooperista and now Practice Fusion’s Senior Director of Design, had a critical role in the development of that iPad application while at Cooper, and now that he has joined Practice Fusion, he took a moment to get on the phone with us and share his unique inside perspective on the impact design can have on businesses.

“It’s not been hard to trace how Cooper’s original design for Practice Fusion’s mobile platform became a seminal turning point in how our business makes products today,” Klocek said, after we exchanged verbal high-fives. “Following the Cooper engagement I’ve been able to see firsthand how the organization shifted its perspective from design being something added on later, to actually driving decisions around branding and product development.”

And Practice Fusion’s investment in design is growing. “Our design team went from 5 to 17 people in six months,” Klocek added. “The original mobile app project that Practice Fusion worked on with Cooper really demonstrated to everyone here the value of design, ultimately driving decisions to rebrand our website and redesign our flagship product.”

To which we say, huzzah!

Big congratulations to Practice Fusion for continuing to raise the bar and the standard of data management for healthcare.

Can illegal networks of zombie computers be a force for… good?

Whenever a major website has significant downtime, people start to wonder: is it intentional? Is Anonymous behind it? Or a secretive group of enemy government hackers?

It’s a reasonable assumption, as it turns out that DDoS—distributed denial of service—attacks are relatively easy to pull off these days. To accomplish it, a ne’er-do-well need only harness thousands of “zombie” computers, point them toward their intended target, and harass the web servers with so much traffic that they are overwhelmed. It’s a temporary effect, but can cause severe economic damage.

It used to be that coordinating such an attack required a great deal of skill. A criminal needed to first infiltrate those thousands of machines using some kind of trojan horse or other malware. To harness their collective power, they would stitch together a “botnet” by designing a way to control them all remotely by issuing them commands, then bend them all to whatever nefarious purpose they have in mind. (Besides DDoS attacks, botnets also send a lot of spam.) Today, however, pre-configured botnets can be rented for a pittance. One source claims to rent a 10,000-strong network of zombie machines for $200.

This got me wondering: why not rent a botnet, and use it for good?

By Tom-b (Own work) [CC-BY-SA-3.0], via Wikimedia Commons

Read More

Behind the scenes of Practice Fusion’s EMR for iPad app

To create our new iPad interface, which was just released as a beta version to active providers, Practice Fusion partnered with the award-winning design firm Cooper. Cooper is renowned for its work across the design world, from startups to over a third of the Fortune 500, with its emphasis on creating simple and enjoyable user experiences.

Testing the iPad EMR

Our iPad User Experience Designer, Kramer Weydt (R), worked closely with Cooper’s Stefan Klocek (L) to make the Cooper design a reality. We met to chat about the process:

First of all, what exactly was your role on the iPad design?

Stefan Klocek: We are user experience designers, meaning we focus specifically on how users interact with the EMR. Instead of just designing from scratch, we first understand our users’ needs and we determine how we can fulfill those needs with the technical resources we have available.

Kramer Weydt: We’re not doctors, but we understand how people interact with devices and we learn from doctors what they need from this technology through research and interviews.

Read More

Driving innovation in healthcare organizations

A paper prototype

Last week, I joined entrepreneur Enrique Allen and designer Leslie Ziegler at Kaiser, where we spoke to doctors from their internal innovation program. We hoped to inspire them as well as to illustrate how design could be used inside Kaiser to improve processes and overall care.

I referred to two case studies—Cooper’s work on the Practice Fusion iPad-based EMR, and a visioning project around the patient clinic experience. In these, I illustrated how we identify problems, generate ideas, and drive decision-making during detailed design.

Both case studies highlighted ways in which multidisciplinary teams can make progress by using cheap prototypes that are quickly iterated. In the case of the Practice Fusion app, we used paper prototypes to test and evolve everything from content organization to animation. We did not need to get permission from hospital IT staff or work with an engineer; we simply needed a new piece of paper and a Sharpie. Prototyping a service starts in a similar manner. Using storyboards and cartoons, we were able to generate and evaluate myriad patient journeys without making costly process and staffing changes.

Many of the questions during the Q&A were symptomatic of a large organization that is beholden to fluctuating regulation. One attendee asked how to get front-line staff on board when they’re already suffering from change fatigue. This will require both communication and empowerment. At Cooper U we teach the value of a radiator wall (a wall showing the progress and decisions of a project) in rallying a team and communicating with an organization; this kind of tool could help establish a sense of consistency and direction amid large-scale changes.

All of Kaiser’s departments were represented at our talk, from general practitioners to specialists. All are charged with improving patient care and overall quality. I appreciated the opportunity to bring some lessons from my experience in healthcare and design, and I’m looking forward to seeing what they tackle next.

Read More

Transforming healthcare infrastructure

(This article was published in the November/December 2010 issue of interactions magazine.)

It seems likely that we find ourselves at an inflection point in the evolution of healthcare. While the situation has certainly been brought to a boil by recent American political events, the opportunities for change fit into a much larger context; they have the potential to truly transform the delivery of healthcare globally.

Unlike some, I don’t believe our current healthcare system is totally broken. I’ve conducted design research in quite a number of clinical settings and have consulted for businesses representing many different aspects of the healthcare industry, including provider networks, medical-device manufacturers, and even health insurance companies. I’ve seen magic worked on a regular basis, and from a historical (and global) perspective, the standard of care in the developed world is astoundingly high. I am in awe of the abilities of doctors, nurses, techs, and other clinicians to consistently function at a very high level despite the fact they’re forced to work with archaic infrastructure in less-than-ideal environments. (As for the insurance companies, perhaps the best thing to say is that they function to make money but could be dramatically more successful as businesses if they changed their approach to things.)

It is at this level—the level of infrastructure—where these big opportunities for transformation exist. It isn’t that we don’t know what kinds of patient and clinician behaviors and medical interventions result in healthy outcomes; it’s that at a systemic level, we’re not doing a good job facilitating these behaviors and driving appropriate interventions. The right changes here will provide a conduit for evolutionary change to cascade throughout the system to achieve dramatic improvements in the quality and cost of healthcare. Which isn’t to say that it also isn’t incredibly important for medical knowledge to continue to evolve; it’s just that we already know enough to dramatically drive up quality and drive down costs.

Many of the opportunities to improve our healthcare system can fit into three big categories: proactively engaging individuals to take better care of themselves; providing better interventional care beyond the walls of the hospital; and improving care delivery inside hospitals through standardization and better collaboration between clinicians, patients, and families. All three of these strategies require new infrastructure and perhaps a shift in the definition, role, and activities that characterize the hospital.

The first two ideas are mostly about what happens outside the hospital. These are things that architects wouldn’t traditionally worry about when designing hospitals. But that kind of thinking has gotten us into our current predicament, where the current built “environment” for providing healthcare is sometimes an impediment to necessary change. If we step back and define a hospital as the nexus for healthcare in a community, we have a platform on which we can imagine the ideal infrastructure for keeping people as healthy as possible in a cost-effective way.

In the May+June 2010 issue of interactions, Hugh Dubberly suggested designers ought to help reframe what healthcare is and how it is delivered, as well as to reframe what it means for design to help. I couldn’t agree more, and in this spirit, propose reconsidering what healthcare infrastructure is necessary to better care for people, how design should address this new notion of infrastructure, and what this all means for the institution of the hospital.

Read More

Creating immersive experiences with diegetic interfaces

I like to think of Interaction Design in its purest form as being about shaping the perception of an environment of any kind. Yes, today the discipline is so closely tied to visual displays and software that it almost seems to revolve around that medium alone, but that’s only because as of now, that’s pretty much the only part of our environment over which we have complete control.

The one field that has come closest to overcoming this limitation is the video game industry, whose 3D games are the most vivid and complete alternate realities technology has been able to achieve. Game designers have control over more aspects of an environment, albeit a virtual one, than anyone else.

Lately I’ve been thinking a lot about this idea that interfaces can be more closely integrated with the environment in which they operate. I’d like to share some of what I’ve learned from the universe of video games and how it might be applicable to other kinds of designed experiences.

In Designing for the Digital Age, Kim Goodwin criticizes the term “Experience design” as being too presumptuous because we don’t really have the power to determine exactly what kind of experience each person with their own beliefs and perceptions has. Even when we work across an entire event from start (e.g. booking a flight) to finish (arriving at the door), there are still countless factors outside our control that can significantly impact how a person will experience it.

Video game designers on the other hand can orchestrate a precise scenario since almost every detail in their virtual world is for them to determine. They can arrange exactly what kind of person sits next to you on a flight no matter who you are or how many times you take that flight.

That isn’t to say that videogames don’t have their limitations. Of course, it isn’t completely true that game designers can determine who sits next to you. They can only determine who your avatar sits next to. The most significant weakness of videogames is the inability to truly inhabit a designed environment or narrative. As much control as we may have over a virtual world, as long as we are confined to experiencing it through television screens and speakers, it won’t be anywhere near comparable to our real world.

Fortunately, there’s a growing effort to address this lack of immersion.

A key part of the problem lies in how we present and interact with complex information diegetically, that is, through interfaces that actually exist within the game world itself.

The 4 spaces in which information is presented in a virtual environment

Before continuing, it helps to be familiar with some basic concepts and terminology around diegesis in computer graphics, the different spaces of representation between the actual player and their avatar. The diagram above illustrates the four main types of information representation in games.
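The taxonomy behind that diagram can be summarized along two axes: whether an interface element is rendered inside the game’s 3D space, and whether it exists within the game’s fiction (i.e., whether characters in the story could perceive it). As a rough sketch of how the four categories fall out of those two questions (the type and function names here are illustrative, not from the article):

```typescript
// Classify a game UI element into one of the four diegesis categories
// using two questions: is it in the 3D world, and is it in the fiction?
type UiElement = {
  name: string;
  inGameSpace: boolean; // rendered as part of the game's 3D world
  inFiction: boolean;   // perceivable by characters within the story
};

function classify(el: UiElement): string {
  if (el.inGameSpace && el.inFiction) {
    return "diegetic";     // e.g. a holographic wrist display the avatar reads
  }
  if (el.inGameSpace && !el.inFiction) {
    return "spatial";      // e.g. a glowing outline around a selectable object
  }
  if (!el.inGameSpace && el.inFiction) {
    return "meta";         // e.g. blood spatter rendered on the "camera" lens
  }
  return "non-diegetic";   // e.g. a classic HUD health bar or ammo counter
}

console.log(classify({ name: "health bar", inGameSpace: false, inFiction: false }));
```

A traditional head-up display answers “no” to both questions, which is why it sits furthest from the player’s sense of inhabiting the world; fully diegetic interfaces answer “yes” to both, which is what makes them feel immersive.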

Screenshot: Duke Nukem 3D

Non-diegetic representations remain the most common type of interface in games. In first person shooters, arguably the most immersive type of game since we usually see the scenery through our avatar’s view, the head-up display has remained an expected element since Wolfenstein 3D first created the genre.

Read More

You’re only a first-time user once

We’ve all got our own personal benchmarks for what makes a good user experience. My personal list includes a few: Does it delight me? Will I recommend it to my friends and colleagues? Would I have used the same approach if I had designed the product? I’ve found among some product executives one particular pattern in these subjective evaluation criteria that is both humorous and troublesome: “Would my mother/grandmother/Luddite Uncle Bill be able to use this product on the first try?”

While there is a sort of noble aspirational quality to this kind of thinking—let’s make everything so dead simple that any person can use every product—it also sets the bar for the experience rather low. I imagine a sea of step-by-step wizard dialogs that target the lowest common denominator and force everyone else to step through the same predefined (and very explicit) experience. If I’m designing a product for people who have specialized knowledge, I want to leverage that knowledge in the product. Why force people to walk when they can run? I’ll want to provide these people with clear, appropriate pathways through the product, but I also want these specialized users to be able to forge a variety of their own pathways through the interface, dependent on the specifics of their situation or how they want to do things.

I once worked with a client to design an intravenous medication delivery device called an infusion pump. This is a machine that nurses in hospitals use to administer drugs to patients by attaching a bag of medication to the device and specifying delivery parameters such as how long and how fast to dispense the medicine. This is critical stuff; the consequences of a mistake could be catastrophic.

Read More