One piece of advice I received in my first year here at Cooper is to avoid referring to personas as creations. Of course they are, and everyone knows it, but they work better when we refer to them as if they were real people in the world. For example, the conversation got off track a bit in one client presentation when I said, "We gave Tracy two kids, with one heading off to college." The discussion went from being about the personas and the design problem to being about why we gave Tracy two kids, and what tweaks might be made to better fit the persona to the client's expectations. Had I instead said something like "Tracy has two children, the older of whom is about to head to college," the conversation likely would have remained on track. Why is that the case?
Recently I ran across Daniel Dennett's theory of intentionality, which seems to explain it. The theory works something like this: how we think about something depends on which type of agency we attribute to it: physical, design, or intentional.
With physical objects, such as billiard balls knocking around a pool table, we assume physical agency. This means we use our intuitive sense of Newtonian physics to predict behavior.
If we assume design agency, as with machines and electronics, we predict behavior based on our understanding of what the system was designed to do. For example, a photographer doesn't worry about the internal workings of her camera when she adjusts the focus ring; instead, she trusts that it will just focus.
With animals and people we assume intentional agency, so we seek to understand their beliefs and goals in order to predict what they will do to satisfy these goals.
Psychologists refer to the chosen type of agency as the adopted stance. For example, when I assume physical agency behind something, I'm taking a physical stance toward it.
For those who put less stock in philosophy alone, Dennett's theory has been tentatively borne out by experiments in neuropsychology as well. In 2003, Helen Gallagher and Christopher Frith reported on test subjects who were scanned while they played a simple prediction game (Rock, Paper, Scissors) on a computer against an unseen opponent. Half of the subjects were told that their opponent was another human; the other half were told that their opponent was software. In fact, all of them were playing against software. The scans showed that the two groups were using different parts of the brain to play. Those who believed they were playing against software used the abstract-reasoning parts of their brains to try to infer the software's pattern; they had taken a design stance. Those who thought they were playing against a human activated the social parts of their brains, which are good at understanding beliefs and intentions; they had taken an intentional stance.
When designing software, we want to keep focus on the users and how they accomplish their goals. In Dennett's terms, we want to keep an intentional stance.
What cues us to take an intentional stance? The experiment suggests that it's simply the sense that there is a real person there. Referring to the abstract set of facts called "users" doesn't trigger that sense, but personas do. They have names, faces, believable back stories, and clearly expressed goals. This is enough to get us to think differently, to adopt the intentional stance that puts our focus in the right place: on the person rather than on the system or the design process.
This, then, explains why we should discuss personas as real people whenever possible. The more we dwell on the fact that they were created, the greater the risk that we slip into the wrong stance, thinking of the designers or the system rather than the persona. This holds when we reference personas with stakeholders, but also with ourselves as we design. The more we ignore the designer behind the curtain, the more fully we adopt the intentional stance and create the best products possible.
Gallagher, H. L., and Frith, C. D. (2003). "Functional imaging of 'theory of mind'." Trends in Cognitive Sciences, 7(2), 77–83.