17 January 2012

Is “good enough” good enough?

This article was first published on 23 May 2005. It is a view from the trenches of 2004-2005 and paints a pessimistic picture. Since then, things have changed to the point where effective UX is now seen as a competitive advantage; but insufficient emphasis on design still leads to the same design disasters.

In 2004, I gave a brief talk about usability research methods at a conference. In the audience was Jef Raskin, the man who played a significant role in the development of the Macintosh computer. He asked me a question (apologies, I forget the exact wording) about how interface developers should insist on the best possible interface as an ethical standard. I replied that it was important, though the importance depends upon the task: airplane cockpits should have unimpeachable standards, whereas problems in (say) word processors may be tolerated more easily because there is less chance of loss of life or health.

However, many large companies employ user interface people and yet produce interfaces that are substandard. For example, Microsoft and Apple have both sometimes ignored their own user interface guidelines when releasing their applications. Setting aside the fact that this encourages third parties to do the same, I often wonder at what point a substandard interface becomes just “good enough” to be considered acceptable.

Consider: I find many things about many operating systems and applications to be substandard, and yet these products continue to be widely used and even dominant in their fields (Microsoft Office, I’m looking at you!). As an example, I was training on MS Project last week and found that it offers only a single level of undo. For an application whose results can be completely altered by a single, simple change, it is hard to understand why only one level of undo is available to the user. This was the latest version of Project, and this is the year 2005.
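
To illustrate how little machinery multi-level undo actually requires, here is a minimal sketch of an undo/redo stack in Python. It is purely illustrative: the class and method names are my own invention, and it says nothing about how Project itself is built.

    # Minimal multi-level undo/redo stack (illustrative sketch only).
    class UndoStack:
        def __init__(self, limit=100):
            self._done = []      # executed commands, oldest first
            self._undone = []    # undone commands, available for redo
            self._limit = limit  # bound the history to cap memory use

        def execute(self, do, undo):
            do()
            self._done.append((do, undo))
            if len(self._done) > self._limit:
                self._done.pop(0)   # discard the oldest history entry
            self._undone.clear()    # a fresh action invalidates redo

        def undo(self):
            if self._done:
                do, undo = self._done.pop()
                undo()
                self._undone.append((do, undo))

        def redo(self):
            if self._undone:
                do, undo = self._undone.pop()
                do()
                self._done.append((do, undo))

    # Example: every edit is registered together with its inverse.
    tasks = []
    stack = UndoStack()
    stack.execute(lambda: tasks.append("Design"), lambda: tasks.pop())
    stack.execute(lambda: tasks.append("Build"), lambda: tasks.pop())
    stack.undo()   # tasks == ["Design"]
    stack.redo()   # tasks == ["Design", "Build"]

Even a bounded, memory-capped history like this is only a few dozen lines, which makes its absence from a flagship product all the harder to excuse.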

Despite this, MS Project appears to be the de facto standard, though in my view that is not a good enough reason to tolerate problems like this. Even with such glaring usability omissions, how does something like MS Project get to be number one? Why do people continue to hand over money for applications with substandard and potentially harmful interfaces? There is much to be said about Microsoft being a large, “reliable” name that is safe for tech-wary bosses to throw their money at, but many users tolerate Project and indeed often extol its virtues.

At some point, the interface must meet a minimum standard of acceptability, and this standard is inevitably going to be less than perfect. I’m curious as to how this minimum standard gets defined, and I suspect a number of factors lie behind it.

  1. Existing prevalence - probably the strongest factor. If it’s widely perceived as the number one, people will buy and use it. The stronger the prevalence, the more likely it is to spiral into a self-fulfilling prophecy.
  2. Backing - the backing of a company like Microsoft is important for many managers. Some will not consider a different company’s software without first checking whether MS has a version already.
  3. Third party support - the provision of support by third parties (for example, in training needs) is very important and possibly the biggest drawback to open source software: how does a company organise training for OpenOffice.org compared to MS Office? This is no criticism of open source, but rather a reflection of how third parties support the safest bet.
  4. Interference - this is the controversial usability aspect. The task is important enough that it must be completed, and drawbacks in the interface are not sufficiently bad to deter users (though they may well be frustrated and annoyed along the way!).

It seems, then, that people are willing to tolerate applications that are less than efficient, even when better solutions exist in the market. At some level, the returns on using the application are seen to outweigh the perceived problems. Sadly for the field of HCI and usability, this means that effort might be best placed in developing an interface that is not perfect, but just “good enough”. How far short of perfect that can fall will depend significantly upon the company and how much market share its product has.

I feel somewhat saddened by this: that people really do prefer to satisfice rather than satisfy when it comes to interface considerations. Given that UI people are the first to go in cutbacks, my recommendation to anyone interested in the field is not to bother. Train yourself as a plumber: people always need clean water.
