Posts about Platforms & technology


A Public Display of Interface

Graphic Design from the Collection, May 14–October 23, 2016, SFMOMA, Floor 6

The last time I visited SFMOMA was three years ago, just before it closed for a major expansion. A few months earlier, an interface I had worked on had won an interaction design award, and I was on a designer’s high, daydreaming as I walked through the museum: would a modern art museum like SFMOMA ever feature the design of something like an interface? Maybe I could be part of that history, contributing to an innovative interface, or at least one little icon.


Amused by the idea that one day there could be an exhibition tracing interface styles through the years, I imagined the possible exhibits celebrating a functional, digital aesthetic.

Consenting Affordances: Web vs. Desktop and their Lovechild, Mobile

Wistful Analog: Skeuomorphism and the Rise of Flatland

Extravagant Limitations: Evolution of the Application Icon

Window Shopping: The Armors of Netscape, Explorer, Firefox, and Chrome


Could something like a 16x16 icon be on display in a modern art museum? Would something so tiny and digital be considered too silly and insignificant to rest under the same roof as a Rauschenberg, O'Keeffe, or Warhol? With the awakening of a new SFMOMA, the interface daydreaming stopped and revealed a new reality: the recognition of an art form whose infancy rivals that of Pop Art but which, until now, has yet to be collected to tell a new story. You’ll find it on floor 6, in the exhibit Typeface to Interface.

Typeface to Interface.

I was reunited with those interface exhibition dreams during the opening of the overwhelmingly airy and far-too-much-to-see-in-a-day new SFMOMA. Its 170,000 square feet of exhibition space make it one of the largest art museums in the United States (larger than MoMA in New York and the Getty Center in Los Angeles) and one of the largest museums in the world focused specifically on modern and contemporary art.

The exhibit takes selected work from the museum's permanent graphic design collection (spanning as far back as 1950) and joins it with examples of graphic design that have shaped the development of the interface – our modern-day means of visual communication. Posters, visual communication systems, and annual reports are interwoven with a variety of technology platforms – the desktop interface, the stylus, and the mobile touchscreen – the tools and methods we’ve used to communicate via the interface. Underlying all of this are the foundations of visual design and, with them, an understanding of human behavior.

Read More


Is Online Voting the Next Big Thing?

Cooper has just posted the first in a series of articles on elections for UX Magazine. Below is an excerpt from the article "Is Online Voting the Next Big Thing?" written by Chris Calabrese. Check it out and read the full article on UX Magazine.

You’re probably reading this article on your mobile phone. And with the US primary elections in full swing, there’s a good chance you’re learning about issues and candidates on the web and sharing your political opinions through social media. Even though we live in a digital age, in Election 2016 you won’t be voting for Clinton or Trump via your phone or the web. Instead, if you vote at all (43% of eligible voters didn’t in 2008), you’ll wait in a long line of US citizens to cast your ballot in one of a number of antiquated ways:

  • Paper Ballot - 1856
  • Mechanical Lever Machine - 1892
  • Optical Scan Ballot - 1962
  • Punch Card - 1964
  • Direct Recording Electronic (DRE) Voting Machine - 1974

It’s amazing that the predominant methods we use to cast votes in government elections today have remained virtually unchanged throughout the digital age.

Think about this: NASA sent two people to walk on the moon in 1969, when the entire agency possessed less computing power than your mobile phone. We can do better!

So what’s the problem?

In a nutshell, the biggest hurdle to online voting is insufficient security. You may wonder, in a world where billions of dollars of financial transactions occur on a daily basis, why can’t I vote for my government officials online? Unlike a financial transaction, which requires a transparent and auditable process for its security, online voting needs to not only be auditable but also anonymous. These conditions, according to a report published by the Atlantic Council in 2014, are “largely incompatible with current technologies”.
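To make that tension concrete, here is a deliberately simplified sketch in Python. It is a toy illustration only, not a real voting protocol and not anything described in the article; all of the names (cast_ballot, BULLETIN_BOARD, and so on) are invented. The receipt that gives a voter auditability is exactly the link between voter and choice that anonymity forbids.

```python
# Toy sketch only: not a real voting protocol. Every name here is invented
# purely to illustrate the auditability-vs-anonymity tension.
import hashlib
import secrets

BULLETIN_BOARD: list[str] = []   # public, append-only audit log of receipts
TALLY: dict[str, int] = {}       # running anonymous count per choice


def cast_ballot(choice: str) -> tuple[str, str]:
    """Record a ballot and hand the voter a receipt they can verify later."""
    nonce = secrets.token_hex(16)  # random salt so receipts can't be guessed
    receipt = hashlib.sha256(f"{nonce}:{choice}".encode()).hexdigest()
    BULLETIN_BOARD.append(receipt)
    TALLY[choice] = TALLY.get(choice, 0) + 1
    return nonce, receipt


def verify(nonce: str, choice: str, receipt: str) -> bool:
    """Auditability: prove that a specific ballot made it onto the public board."""
    expected = hashlib.sha256(f"{nonce}:{choice}".encode()).hexdigest()
    return expected == receipt and receipt in BULLETIN_BOARD
    # The catch: the very data that makes the vote checkable (nonce + choice)
    # also links this voter to this choice, which is the anonymity problem.


nonce, receipt = cast_ballot("candidate-a")
print(verify(nonce, "candidate-a", receipt))  # True: counted and checkable
print(TALLY)                                  # {'candidate-a': 1}
```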

Read all of Chris' article here on UX Magazine.

Read More

1DocWay: Increasing access to psychiatric care

We’ve been chatting with some of the startup founders we’ve met through Rock Health. They’ve offered us an inside look into how they’re tackling some of the biggest challenges in healthcare. Now we’re offering you a peek behind the curtain. 

Company: 1DocWay

Founders: Danish Munir, Samir Malik, and Mubeen Malik

Read More


A Brief History of Web Publishing

For you, this image may or may not conjure up intense feelings of nostalgia. For me, this was The Beginning of the Internet. It was a land accessed through a ritual of weird sounds, a tether of harsh but magical noise that made it possible for me to climb through the phone line into a realm of shared imagination. I had conversations with strangers and pretended to be someone different. I flew spaceships and fought dragons and hung around taverns quaffing ale and discussing the finer points of dagger combat with dwarves. My Nintendo became suddenly very lonely.

Read More

The original model of the web was that of publishing interlinked pages. We take a look at how this model has worked and how the technology has changed, and muse a bit on the future.

Read More

Cooper, Augmedix and Google Glass: No Real Estate? No Problem

 

Interaction designers today are really good at designing screens. Designing for Google Glass took us out of that comfort zone, and in some ways back to the basics. It reminded us of that truism that the raw building blocks of user experience are not screens—they are experiences.

Google Glass is in many ways not ready for prime time, but makes perfect sense for certain specialized applications, like what Augmedix has envisioned for doctors, who need to capture and reference key information while keeping their full attention on patients. Hands-free operation is one of the key strengths of today’s iteration of Glass. Medicine is particularly rich with hands-free mission critical use cases, and Augmedix is taking the first step down that path. Others are imagining similar applications for Glass, such as for first responders in emergency situations.

Read More


Designing the Future: Cooper in Berlin

Most software projects are built around the question “What are we going to do next?” But occasionally we’re asked to think farther out. Projects focused on the 5-10 year range are more about “Where are we headed?” and “What’s going to inspire people?” These are different questions to ask, and answering them changes the usual process of interaction design.

 

I’ve been thinking about these questions for a while, and at the MobX conference in Berlin I led a workshop in which a group of 16 designers and strategists took a look at how to answer them.

 
So…how do you do it? The core of the matter is to understand what’s going to be different in the future you’re designing for.

These kinds of projects are less about “What’s next?” and more about “Where are we headed?” and “What’s going to inspire people?”

Read More


Augmented Experience

Photo via Reuters / Carlo Allegri

Let’s be honest: Google Glass looks pretty silly. Its appearance is out of time, futuristic, and obnoxiously so. And it's out of place in daily life—a strange accessory with mysterious purpose, as if someone were to walk around all day with a skateboard on a leash.

But Glass also points to an intriguing future, one in which the line between using a digital device and simply going about daily life is removed. Whereas traditional spectacles serve a corrective purpose, helping us see reality more clearly, Glass offers a new category of lenses that promise to augment the reality we see. It opens a vast new frontier for the practice of interaction design that, like the Wild West, is full of lawlessness and danger and promise. And it is the UX community that will shape this landscape; we will determine its character and the impact it will have on people’s lives.

A key question all this raises is: what “reality” is Glass augmenting? At the moment, being a Google product, the augmentation primarily targets the urban economic and social spheres. Look down the street through Glass and you may see restaurant storefronts adorned with floating metadata describing the cuisine type and star ratings from previous diners. Turn your head, and an indicator points toward your next calendar appointment. Peer at a product on the shelf, and prices for similar products are displayed for easy comparison. You’ll always know where you are, where you need to be, and what you’re looking at. The reality that Glass augments is a realm of people, objects, places of business, and locations. In other words, what can be expressed in a database and efficiently searched.
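As a thought experiment, the kind of lookup behind such an overlay can be sketched in a few lines. Everything below is hypothetical: the Place records, coordinates, and field-of-view cutoff are invented for illustration, and Glass’s real pipeline is certainly more involved. The point is simply that the “augmentation” reduces to filtering a searchable table of annotated places down to the ones the wearer is facing.

```python
# Hypothetical sketch of a database-backed overlay: given the wearer's
# position and compass heading, pull annotations for places in view from an
# ordinary searchable table. Data, names, and cutoffs are invented.
import math
from dataclasses import dataclass


@dataclass
class Place:
    name: str
    lat: float
    lon: float
    rating: float   # e.g. star rating from previous diners
    cuisine: str


def bearing_to(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Compass bearing in degrees from point 1 to point 2."""
    d_lon = math.radians(lon2 - lon1)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    x = math.sin(d_lon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(d_lon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360


def places_in_view(places, lat, lon, heading, fov=60):
    """Return overlay labels for places inside the wearer's field of view."""
    labels = []
    for p in places:
        # Signed angular offset between where the wearer is looking and the place.
        offset = (bearing_to(lat, lon, p.lat, p.lon) - heading + 180) % 360 - 180
        if abs(offset) <= fov / 2:
            labels.append(f"{p.name}: {p.cuisine}, {p.rating} stars")
    return labels


restaurants = [
    Place("Taqueria Luna", 37.7765, -122.4172, 4.5, "Mexican"),
    Place("Noodle Bar", 37.7770, -122.4160, 4.0, "Ramen"),
]
print(places_in_view(restaurants, lat=37.7763, lon=-122.4170, heading=45))
```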

Toward a better future

At this point in the conversation, the story usually veers into the realm of exasperation and despair. Google Glass represents the death of spontaneity! It will systematize and computerize our lives! Organic experience will be lost! (And, most insidious of all) Google will monitor and monetize every saccade of our eyeball, every step we take!

“Big Brother” from the film adaptation of Orwell’s 1984

Given the penchant for technologists to base business models on advertising and “big data” about their customers, it is not surprising that Google Glass can be seen as a kind of portable panopticon. But I think the truth that this device foreshadows is something potentially more benign, and almost certainly beneficial.

The dystopian narrative that depicts a society dominated by machines and ubiquitous surveillance is common, expressed through fiction, film, and even journalism, which tends to draw on the same sinister rhetoric. George Orwell’s 1984 describes the homogenization and suppression of culture through rules, systems, and constant surveillance. In a more recent popular expression, Pixar’s Wall-E imagines a future humanity composed of zombie-like innocents, shuttled along by automated chairs, staring feebly into digital screens, mobilized—and controlled—by machines. The plausibility of these futures is made even more vivid by the unfolding story of the depth of NSA surveillance.

To paraphrase a recent piece by Don Norman, it all depends on how we design and develop augmented reality applications. If we manage to create useful and utility-producing applications with wearable technologies like Google Glass, people will benefit. This seems at first more like a truism than truth. But the obviousness of the statement belies the underlying premise, which is that Google Glass and its future iterations are simply a canvas on which we can write the future of our “augmented” everyday experience. So let’s not leave it all up to Google, shall we?

Big ideas

Ideas for the positive future of augmented reality abound. Augmedix, for example, is a small company with a vision of Google Glass re-shaping the doctor-patient relationship. Increasingly, the burden of the new and fraught world of digital medical records is damaging this interaction. Doctors stare at screens instead of faces, they spend as much time clicking checkboxes and radio buttons as they do examining the bodies and listening to the voices of the people under their care. Augmented reality could turn this scenario on its head by allowing doctors to look at and converse with their patient while simultaneously accessing and transmitting important information through Glass. This will almost certainly lead to fewer errors, an increase in trust, and ultimately better health outcomes.

A doctor wears Glass with the Augmedix app.

Or consider William Gibson’s Spook Country, a novel in which a central character creates “locative art,” what you might call augmented reality sculpture. Imagine looking at a city fountain through your augmentation goggles and seeing a bloom of light and color where others see only water. That we could transform our physical landscape in a way that enhances its beauty—rather than simply enhancing its economic potential—is a stunning notion. Unlike 3D movie glasses or straight-up “virtual reality,” the idea of a physical/virtual mashup offers us a chance to experiment and play in realms previously available only to the world of screens and displays, without losing the sense of being present in a place, a loss virtual reality cannot avoid. We remain in the real world.

The design of augmented reality

The first attempts to harness the power of Glass-like technology will be “ports,” shoe-horning old functionality into a new form factor. Text and email messages will appear, caller ID will notify you of a phone call, the front-facing camera will take a picture or video on command. But none of these use cases address new goals. They simply make achieving old goals incrementally faster or more convenient. I don’t have to lift my phone and look at the screen to see a text message or know who’s calling. I don’t have to lift my camera and press a button to take a picture. The difference in my experience enabled by porting functionality from my phone to Glass is a difference of degree, not a difference in kind.

More interesting will be the forays into using augmented reality tech to solve previously unmet goals. Augmedix is a good example, because it bucks a trend toward less personal medicine and solves both a doctor and a patient goal. Locative art is similarly interesting, because it provides an entirely new artistic medium and way of experiencing that art. Mapping and orientation in a visually augmented world represents another fundamental change, because it bridges the gap between the abstract 2D map and the immediately actionable—a translation that currently happens in the human brain.
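A small sketch makes that last translation concrete. Assuming the compass bearing to a destination is already known from a map service, the snippet below (hypothetical function name and thresholds) shows the kind of conversion a heads-up display could perform so the wearer never has to rotate the 2D map in their head.

```python
# Hypothetical illustration of the map-to-body translation: turn an absolute
# compass bearing into a cue relative to where the wearer is facing.
# The function name and thresholds are invented for illustration.
def relative_cue(bearing_to_destination: float, wearer_heading: float) -> str:
    """Convert a compass bearing into an instruction relative to the wearer."""
    offset = (bearing_to_destination - wearer_heading + 180) % 360 - 180
    if abs(offset) <= 15:
        return "straight ahead"
    if abs(offset) >= 150:
        return "behind you"
    side = "right" if offset > 0 else "left"
    return f"{abs(round(offset))} degrees to your {side}"


print(relative_cue(bearing_to_destination=90, wearer_heading=30))  # 60 degrees to your right
```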

Go get ‘em

Augmented reality is in its infancy. Google Glass still faces some serious challenges, especially on the hardware front—miniaturizing the device and making it less obtrusive are necessary if wearing it is to feel less like pulling a skateboard on a leash everywhere you go. But the frontier for experience design this device opens up is huge, and it doesn’t have to remain within the boundaries Google sets. Part of our challenge and calling as a UX community is to think deeply about what an augmented experience feels like and how it shapes people’s lives. As you would with any user experience, let unmet user goals guide your design.

Your role in this revolution is just beginning.

Read More

Engaging Millennials - the UX Boot Camp: Wikipedia

As mobile devices become widely adopted, organizations are increasingly focused on designing engaging experiences across multiple platforms. At Cooper’s UX Boot Camp with Wikimedia, the non-profit took this a step further, challenging the class of designers to create a solution that facilitated content input and encouraged a new group of editors, specifically Millennial women, to contribute through mobile devices.

Read More


Television is dead. Or is it?

How the Internet, devices, and a new generation of viewers are redefining the “boob tube” of the future

Announcing the next Cooper Parlor: The Future of TV

When: Thursday, October 24th (Networking at 6, event starts at 6:30)
Moderated by: Richard Bullwinkle, Head of US Television Innovation at Samsung, and Jeremy Toeman, CEO of the startup Dijit Media
Where: Cooper's Studio, 85 2nd Street, 8th Floor, San Francisco
Cost: $10
Tickets

Once, television was simple. Families gathered religiously around a glowing box to watch the latest episode of “I Love Lucy.” Fast-forward to today: the Internet enables a multitude of new viewing devices, and wildly different viewing habits have turned “television” on its head. In this Cooper Parlor, Richard Bullwinkle, Head of US Television Innovation at Samsung, and Jeremy Toeman, CEO of the startup Dijit Media, will share some curious trends in media consumption, technological advances, and the evolution of show content and format. Then they’ll lead a brainstorming session to rethink the “television of the future” together.

Here are just a few curious factoids we’ll explore:

  • What is the #1 device for watching Netflix? The iPad? A laptop? It turns out it’s the Sony PlayStation 3. Why do viewers flock to this device rather than a connected TV or an iPad?
  • Over 90% of all TV viewers use a second screen while watching TV. How might this impact the way we design the television experience and programming?
  • Can you guess why 70% of connected TVs in the US actually get connected to the internet, but only 30% do in Europe?

Join us as we discuss where TV is headed, and generate new ideas for what television can be!

What is the Cooper Parlor?

The Cooper Parlor is a gathering of designers and design-minded people to exchange ideas around a specific topic. We aim to cultivate conversation that instigates, surprises, entertains, and most importantly, broadens our community’s collective knowledge and perspective about the potential for design.

Read More

Can illegal networks of zombie computers be a force for... good?

Whenever a major website has significant downtime, people start to wonder: is it intentional? Is Anonymous behind it? Or a secretive group of enemy government hackers?

It’s a reasonable assumption, as it turns out that DDoS—distributed denial of service—attacks are relatively easy to pull off these days. To accomplish one, a ne’er-do-well need only harness thousands of “zombie” computers, point them toward the intended target, and harass its web servers with so much traffic that they are overwhelmed. The effect is temporary, but it can cause severe economic damage.

It used to be that coordinating such an attack required a great deal of skill. A criminal first needed to infiltrate those thousands of machines using a trojan horse or other malware. To harness their collective power, they would stitch the machines together into a “botnet” by devising a way to issue commands to them all remotely, then bend them to whatever nefarious purpose they had in mind. (Besides DDoS attacks, botnets also send a lot of spam.) Today, however, pre-configured botnets can be rented for a pittance. One source claims to rent a 10,000-strong network of zombie machines for $200.

This got me wondering: why not rent a botnet, and use it for good?

 

By Tom-b (Own work) [CC-BY-SA-3.0], via Wikimedia Commons

Read More

