17 January 2012

Challenge Analysis

I originally published this article on 1 November 2004.

For the task of researching usability, the analyst has at their disposal an enormous range of tests.

From questionnaires and interviews, through protocol analysis, to performance data and OSMs (and other forms of cognitive task analysis), one would expect all of these to be sufficient to measure everything the analyst wanted to know about a user.

But it still remains a difficult task to elucidate exactly what a user is thinking. Protocol analysis (Ericsson and Simon, 1982), or PA as I shall call it here, is also known as “think aloud” analysis: the user is sat in front of a system and asked to operate it while “thinking out aloud”. Ideally, the user gets so used to doing this that their cognition becomes easier to access. However, although it is a fantastic method, it does have drawbacks: it depends upon the user being a good verbaliser, and it can be slow to test any great number of people. For these reasons, many studies measure only a few people. Other measures like questionnaires can be simple to issue and get completed, but the grain of analysis is often coarse.

However, one possibly new method occurred to me today, as an extension of protocol analysis, if you will. Maybe it’s time for analysts to get proactive!

Challenge Analysis.

The role of the analyst during PA is to remain quiet and let the user speak (with occasional prompts to remind them to verbalise their thoughts as much as they can). This is very much in the tradition of anthropological investigation, where the investigator aims to influence the subject of study as little as possible.

However, scientific experiment is all about intervention: without it, there would be no such thing as an experiment, only observational studies. Clearly intervention is allowed, but it must be as tightly controlled as possible before it can be considered a worthwhile part of an investigation.

My proposal is challenge analysis. This is very much like a PA, but it allows the analyst to intervene and ask questions of the user directly. While this is commonly done after PA using an interview technique, by then the user will often have forgotten the salient factors behind their decisions or behaviour. Interrupting them during the session and asking them to justify their actions captures those factors while they are fresh, but it could be ruinous to a study if done incorrectly.

However, if done properly, a good challenge could extend the utility of PA somewhat.

Here’s an example. A user is writing a document using a text editor or a word processor. The user mentions that they need to move a block of text, and to do so they highlight it, copy it, move the cursor to the destination, paste it, move back to the original position, highlight the text again, and then delete it. In terms of actions, this takes longer than using the “cut” mechanism.
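
To make that action-count claim concrete, here is a minimal sketch in Python. It is purely illustrative and not part of any actual study; the decomposition of the task into these particular named steps is my own assumption about how one might break it down.

    # Illustrative comparison of the two text-moving strategies described above.
    # The split into discrete steps is an assumption made for this sketch.

    copy_then_delete = [
        "highlight block",
        "copy",
        "move cursor to destination",
        "paste",
        "move cursor back to original position",
        "highlight block again",
        "delete",
    ]

    cut_then_paste = [
        "highlight block",
        "cut",
        "move cursor to destination",
        "paste",
    ]

    print(f"copy-then-delete: {len(copy_then_delete)} steps")
    print(f"cut-then-paste:   {len(cut_then_paste)} steps")

Run as a script, it simply reports seven steps against four, and that gap is exactly what the challenge would probe.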

Observing this, I (the analyst) interrupt their work and ask them why they made this decision. If I am not satisfied, I can (diplomatically) point out any problems with their reasons and ask them again to justify what they are doing, what they were thinking, and what made them make that decision in the first place.

The disadvantages are many, though:


  • It could create a confrontational atmosphere between the user and the analyst;
  • It might interrupt the user in the middle of a task, causing them to alter their behaviour in a way they normally would not;
  • It could provide the user with a new strategy, which they will then use in the future instead of their old one;
  • It stops the study being an observational study, and the analyst’s own personality may become too obvious;
  • The analyst would have to decide where and when to challenge (and when not to), introducing more experimenter bias.


However, a challenge analysis would also offer some advantages:


  • The user’s response to a challenge is more likely to be accurate than one obtained from a post-PA interview;
  • Interesting or illustrative user behaviour that occurs during the study is less likely to be missed by the analyst;
  • Because the analyst may gain more insight into the user’s behaviour, the user may too, and come away from the study knowing more than when they arrived.

Clearly this is a very dangerous method to use, but with careful thought and good skills, both analyst and user may emerge from the experience richer in knowledge.
