17 January 2012

So, you’re short of cash and you need to test…

This article was originally published on 5 December 2004.


Usability analysis is the most expensive thing in the world, right? My own consultancy, Milui, doesn’t come free (unless you’re a charity or similar), and the knowledge is obvious, isn’t it? Surely just applying a little common sense is enough to do what usability analysis does? After all, programmers are humans too, and they have the same insight into the human condition that everyone else does. Ergo, a programmer is ideally placed to develop interfaces.

However, in human-computer interaction, many findings are counter-intuitive. Take, for example, one experiment from my thesis. I gave people a list of short summaries and asked them to judge whether each answered a particular query. Some summaries had the title of the document, others just a couple of lines of text (like the snippets that Google shows you), and others had both. It seems obvious that the longer summaries would give the most correct judgements: there is more information there on which to base a decision, and it all comes from the document, so more information should make a correct decision more likely.

Except that it isn’t the case. We found that the snippets often misled people, giving the impression that a document was more related to the query than it actually was (snippets show text with the query’s keywords embedded). Curiously, titles were the best basis for making a decision, and people seemed to rely upon them even though they preferred having the (misleading) snippets. People used titles as the basis of their decisions without even being aware of it.

This illustrates two things. First of all, it shows that HCI and usability research exists to show how wrong “common sense” can be. If common sense were all that were needed, there would be no bad interfaces, because the best interface would be obvious to all of us.

Secondly, interface work is often not about pretty pictures and deciding where to put a button. Most of the hard work concerns how people think about things: how they process information, and how that information leads them to the decisions they make. Much of this is not explicit, particularly when dealing with expert users (who have masses of knowledge that is “hidden” from view), and the analyst’s job is to draw this information out - to really understand how it is that people operate.

But this doesn’t have to be expensive. There are many methods that can be used in usability analysis.

(As a side note, there appears to be a distinction between usability analysis and HCI analysis. The former relies on testing something to see how it works and how to improve it - papering over the cracks, if you will - whereas HCI analysis begins before the interface is designed, so that the whole way an application works is done properly from the start.)


Observation studies

Observation studies are probably the most valuable things. Just sit a friend down in front of your interface and tell them to do something. The chances are that they will be frustrated and confused by many things that you never even noticed before. It can be a humbling process for any programmer or designer, but if you really want to make a better interface, this is the way to go.

Beware of using people who will just say “nice” to everything. This can be more harmful than testing nobody, because you get the impression that everything is okay when it is not. Try to choose pedantic and fussy people if possible.


Interviews

Often performed after an observation study, interviews can be valuable for exploring problem areas in depth. Draw attention to your own faults as a designer, and try to get your interviewee to talk freely. Thank them for their input and their time, and tell them that they have made a difference to your work.


Questionnaires

These can be much abused, but they can be useful. Personally, I would recommend including some open-ended questions to allow subjects to discuss things more freely than they would with closed questions. Likert scales can also be useful: for example, show five boxes, each with a statement (“I found this app easy to use”, “I found this app had a few problems”, and so on), and ask the subject to tick the one that applies.

These can allow testers to get through lots of people in a short time. In addition, they can be administered on the web quite easily which means that users can be tested around the globe.
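Tallying the results of such a questionnaire is straightforward. Below is a minimal sketch, assuming each Likert response has been coded as a score from 1 (“strongly disagree”) to 5 (“strongly agree”); the statement names and data are invented for illustration.

```python
# Sketch: summarising Likert-scale questionnaire responses.
# Statement keys and scores below are hypothetical examples.

from statistics import mean

# Each response maps a statement to a score from 1 ("strongly
# disagree") to 5 ("strongly agree").
responses = [
    {"easy_to_use": 4, "had_problems": 2},
    {"easy_to_use": 5, "had_problems": 1},
    {"easy_to_use": 3, "had_problems": 4},
]

def summarise(responses):
    """Return the mean score per statement across all subjects."""
    statements = responses[0].keys()
    return {s: mean(r[s] for r in responses) for s in statements}

print(summarise(responses))
```

With web-administered questionnaires, collecting the responses in a structure like this (one dictionary per subject) makes the analysis a few lines of code.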


Prototypes

These can be done on paper, on computer, or in any other suitable medium. Prototypes don’t actually work; rather, they just look like the real thing without any functionality.

Paper prototypes are very simple to make, but computer prototypes can be more useful. There may be questions as to how well a paper prototype relates to a computer application: people may deal with the two in different ways, so a computer prototype of an application would appear to resolve this.

Visual RAD environments such as Visual Basic, HyperCard, Delphi/Kylix and so on are useful for this: a designer can generate a simple (non-working) interface in very little time, and modify it immediately if needs be.

Other tools such as Flash or SVG can be extremely useful here: because both can be delivered through the web browser, computer prototypes can be administered across the Internet. Open source projects can leverage their users to help with interface design this way - stick up a webpage with a demo and ask people to “have a go” at it.
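To give a flavour of how lightweight such a prototype can be, here is a sketch that generates a non-working SVG mock-up of a dialog box - a rectangle, a label and a fake button - and saves it to a file that any browser can display. All element sizes, labels and the filename are invented for illustration.

```python
# Sketch: generating a non-working SVG "paper prototype" of a dialog.
# All coordinates, labels and the output filename are hypothetical.

def make_prototype_svg(title, button_label):
    """Return an SVG string mocking up a dialog with one button."""
    return f"""<svg xmlns="http://www.w3.org/2000/svg" width="320" height="200">
  <rect x="10" y="10" width="300" height="180" fill="#eee" stroke="#333"/>
  <text x="20" y="40" font-size="16">{title}</text>
  <rect x="20" y="140" width="100" height="36" fill="#ccc" stroke="#333"/>
  <text x="55" y="163" font-size="14">{button_label}</text>
</svg>"""

svg = make_prototype_svg("Save changes?", "OK")

# Write the mock-up to a file; open it in a browser to "test" it.
with open("prototype.svg", "w") as f:
    f.write(svg)
```

The same idea scales up: generate one SVG per screen, link them from a webpage, and you have a clickable-looking prototype that remote users can comment on.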

Think aloud

Verbal protocol analysis studies are commonly known as “think aloud” studies. Here, the subject is encouraged to verbalise their thoughts as clearly as possible. Obviously, some way of recording what is said is useful, but the analyst should also pay attention in real time, as this method is often followed by an interview.

Subjects will probably need a little training with this method, but a few simple exercises often suffice, such as getting them to add up a list of integers aloud. The important thing is to get the subjects to verbalise, and the analyst should encourage this if the subject falls silent. However, the analyst must take care not to prompt the subject into giving the responses he or she wants to hear, as this defeats the object of protocol analysis.

Cognitive Walkthrough

This is a nice cheap method because it doesn’t necessarily need subjects: the analyst can set themselves up as a subject. First of all, the analyst decides to test a particular aspect of the interface (say, “how can I move blocks of text around in this text editor?”), and then performs the task, pretending as far as possible to have only the knowledge of their subjects. To be fully rigorous, this method requires a good understanding of the user base (i.e., how knowledgeable they are, what they will expect, and so on), and as a method it is susceptible to analyst bias. Again, a trained, experienced and above all disinterested analyst is worth their money here. However, it is a simple method that can work extremely well in the hands of a competent tester.


Above are outlined several methods which can be used to better understand how people use things. Unless they are used rigorously by a trained analyst, they are unlikely to provide you with great insights into cognition and behaviour, but they can highlight specific problems with interfaces.

However, be cautious! Many studies have shown that testing only a few people will not reveal all of an interface’s problems. Major issues may well be observed, but despite the advice of some gurus, results depend on both the quality of the analysts and the quality of the people being tested.

Further Links:

Jakob Nielsen on Guerrilla HCI - Nielsen is probably the most famous face of usability, and this article gives an introduction to cheap usability methods.

An opposing view from User Focus (UK)

A general article at SitePoint.
