At Cooper, we spend a considerable amount of time understanding the experience requirements of the products that we're designing. Our client stakeholders often request a design that our users will react to as feeling simple, intuitive, innovative, and so on. In many cases the products we're asked to design must display a sense of trust.
Why is trust good?
Trust can play an important role in the successful adoption of a product. In data backup and management, for example, if the software doesn't give a user such as a backup administrator confidence that his data is safe and securely managed, he's unlikely to use, or switch to, that software, especially when his job is on the line should servers go down and critical data be lost. Likewise, customers of online banking websites want to know that their personal information is securely housed and not at risk of being stolen.
How do we make software that appears trustworthy?
All aspects of design and technology contribute to a product's trustworthiness, whether through the visual presentation, the tone of content, the accurate and clear communication of data, or the brand awareness of a company or product. When considering visual design, our task is to create a visual language that appears professional, high in quality, and appropriate to the user's expectations. Content and data should be clear, concise, accurate, and error-free. Finally, repeated interactions with a brand can build trust over time if they are consistent, dependable, and memorable.
When trust can be bad
Right now you might be wondering, "Trust can be bad?" You've got a point. No client has ever asked me to design a software application, website, or device that's intended to be untrustworthy. But our continuing reliance on complex information systems can lead us down the path of blindly relying on data we don't fully understand. Trust must be cultivated in users, but too much trust, like too much of anything, can be a bad thing.
Consider the financial meltdown. I don't pretend to fully understand what happened, who's to blame, or how it could have been prevented. What seems clear is that many of those responsible were looking out only for themselves. Michael Lewis, author of Liar's Poker, traces the issue back decades in "The End of Wall Street's Boom":
The shareholders who financed the risks had no real understanding of what the risk takers were doing, and as the risk-taking grew ever more complex, their understanding diminished. The moment Salomon Brothers demonstrated the potential gains to be had by the investment bank as public corporation, the psychological foundations of Wall Street shifted from trust to blind faith.
When users assume that a system is correct, they also assume that what they are doing is correct, ethical, and in everyone's best interest. In doing so, they (perhaps unconsciously) absolve themselves of accountability. It is incumbent on the system to leave users in no doubt about their own accountability.
In How Doctors Think, Jerome Groopman discusses a surgical protocol for cardiac tamponade, a condition in which "fluid has accumulated around the heart and was compressing it." In the story, Dr. James Lock recounts how a standard procedure, in which a needle is used to remove the fluid, had nearly been fatal for a young patient:
"Why do you stick the needle under the xiphoid?" Lock asked. I paused. "Because that was how my teachers taught me in my training."
"And why do you think your teachers taught you the way they did?" Lock asked.
"Because that's how they were taught."
By not fully understanding the procedure or its history, the medical staff ceased to improve it and, more critically, put the patient at great risk.
When working with complex systems, users should not only understand the data but, when necessary, be able to ascertain its origin. Consider the frequency with which patients receive the wrong medication in healthcare environments. Relying too heavily on a system could give a nurse the false sense that she has administered the correct medication when, in fact, a pharmacist entered the wrong dosage into the computer.
So what's the solution?
The solution is to dive deep into the research and fully understand the trust needs of your stakeholders and users. In the examples above, users should feel that the software they're using is reliable and dependable. But most of all, users should understand the system, be accountable for managing it, and be empowered to change it.