24 November 2010

All users scan!

So runs the common wisdom and it has done for a few years. If my memory serves me well, this notion was first widely accepted back in the 1990s (thank you Mr Nielsen).

But I did some testing a while ago and something strange happened. After the usual university-aged, I-can't-stop-fiddling-with-the-mouse younger participants had raced through the screens as if their lives depended upon it, the next participant was an accountant nearing retirement. He was well established and well respected in his town and used the web a lot to access information about his customers. He said (and I had every reason to believe him) that he had used the web for many years, probably longer than I had, so keen was he on technology's ability to improve work.

But then the strange thing happened. I told him what I wanted him to do and opened up the first screen. After sitting there poised for action, he took the screen in at a glance and then sat back. Right hand off the mouse, left hand off the keyboard, arms folded. And he carefully read - repeat, read - his way through the content shown on screen. When finished, he reached out for the mouse and slowly but carefully scrolled the screen to view the remaining couple of lines. Once he had set up his screen, his hand was straight off the mouse again. And then he read, slowly and carefully, again. He asked a question about the grammar in one sentence - a perfectly valid one too.

And after that, I had a few more users reading carefully instead of scanning quickly, so I introduced a new element into the study. At the end, I asked them questions about the first page's content and found recall to be quite good. These readers really were reading and taking the information in.

So do users scan?

Yes and no. Some do and some don't. But even those who scan lots will sometimes seem to slow down and read; whereas others who read will skip through text.

But what lies at the bottom of all this? What makes us read or scan?

I have no formal testing behind this but I would suggest three things:

1) Users' disposition: some, like our esteemed accountant, seemed naturally disposed towards reading. Others want to hurry up.
2) Task importance: is this just finding out a trivial piece of information? Or is it making a large financial transaction? The more important a user thinks their task is, the more likely it is that they will read rather than scan.
3) Relevance: using superficial cues (like the title or summary), users make quick judgements about how relevant a text is to what they want to do. If it seems completely irrelevant, they are more likely to ignore it.

These are only observations, so treat them more as hypothesis generation than anything firm. I do think they are sound, but evidence trumps all.

And how does this affect design?

1) Remember that not all users scan. Some have strategies for reading thoroughly through web text content.
2) If a task is important and can have serious consequences, tell users. Conversely, if it's trivial and can easily be reversed, tell them that too (but then, where do you stop?)
3) Help users make relevance judgements quickly. Use descriptive titles rather than ones which might mislead.

Either way, it's wrong to say, "Users only scan".

19 November 2010

Statistics in UX

I thought of doing a short series on statistics on this page after a recent run of posts on the IxDA discussion list where there was some confusion about what qualitative data is. Stats offers a tremendous return on the time needed to learn it.

Let's start at the beginning: data types. There are only three types of data. This is fundamental to good stats practice because the type of data defines how analysis is done.
  1. Nominal (or categorical)
  2. Ordinal
  3. Interval
Nominal (categorical) data

These are data that have names / categories that exist independently of the other names. The classic example is sex / gender: people are (generally) male or female. Nominal data are firmly qualitative - it's impossible to argue that (for example) male is worth twice female or vice versa without being nonsensical or arbitrary.

Other examples include occupation (with exceptions, see ordinal below), nationality, brand of coffee preference, and pass/fail. A UX designer is not worth 2 accountants.

Ordinal data

These are categories that have an order in them. The classic example is a Likert scale or Likert-type scale. So if you issued a survey and one question was, "What do you think of this website?" and the answers were "superb!", "good", "don't care", "dislike it", and "hate it", then there is an order of feeling that makes the resulting data ordinal. If the responses were just "like" and "dislike", then we're back to nominal data.

Some nominal data can be ordinal but this depends entirely upon the measurement scale. The occupation exception above could be ordinal if the question was something like, "How senior are you?" and the measurements were "junior", "mid-level" and "senior". These have an order to them in which one is regarded as the highest, another the lowest and the third in-between.
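As a sketch of the practical difference: ordinal categories only make sense once an explicit ordering is attached, which nominal categories lack. A minimal Python illustration, using the seniority example above (the rank numbers are just an assumed encoding for illustration):

```python
# Ordinal data carry an order, so we can rank and compare them by
# attaching an explicit ordering. The numeric ranks are arbitrary
# labels for position, not measurements of magnitude.
SENIORITY_ORDER = {"junior": 0, "mid-level": 1, "senior": 2}

responses = ["senior", "junior", "mid-level", "junior"]

# Sorting is meaningful for ordinal data because the order exists.
ranked = sorted(responses, key=SENIORITY_ORDER.get)
print(ranked)  # ['junior', 'junior', 'mid-level', 'senior']

# For nominal data like occupation there is no such mapping:
# putting "accountant" before "UX designer" alphabetically is
# arbitrary, not a statement about which is greater.
```

The key point is that the ordering lives in the measurement scale (the dictionary here), not in the category labels themselves.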

Ordinal data are (generally) qualitative, although Nunnally argued that Likert scales with more than 11 points can be considered interval.

Interval data

These data are the classic quantitative data. Things like "reaction time in milliseconds", "amount of alcohol units consumed", "age in years", "number of errors during task" or "income in dollars".

Watch out for...

Sometimes, interval data are subsumed into bands: for example, asking someone about their income might be done with 3 bands: "below $25,000", "between $25,001 and $80,000" and "above $80,000". The measurement is then ordinal.

It is possible to change the type of data by summarising interval data into categories: so reaction time might instead be recorded as "fast" / "medium" / "slow" (ordinal) or "pass" / "fail" (nominal). This affects what is referred to as the "granularity" of the data: interval data are seen as fine-grained, ordinal less so and nominal the coarsest.
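The reaction-time example can be sketched in a few lines of Python (the cut-off values are invented for illustration):

```python
def to_ordinal(ms):
    """Collapse an interval measurement (reaction time in ms)
    into an ordinal band. Cut-offs are arbitrary examples."""
    if ms < 300:
        return "fast"
    elif ms < 700:
        return "medium"
    return "slow"

def to_nominal(ms, limit=700):
    """Collapse further into a nominal pass/fail category."""
    return "pass" if ms < limit else "fail"

times = [250, 480, 910]                  # interval: fine-grained
print([to_ordinal(t) for t in times])    # ['fast', 'medium', 'slow']
print([to_nominal(t) for t in times])    # ['pass', 'pass', 'fail']
```

Note that each conversion discards granularity: you can go from interval to ordinal to nominal, but never back again from the summarised data alone.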

But aren't some of these measures quantitative? For example, if I measure the gender of a sample and find 24 females and 24 males, aren't those numbers quantitative?

No, the summary is just that: a summary. The underlying data are measured with 2 categories. Anything can be summarised by frequency, but that doesn't make the data quantitative. The underlying data are qualitative.
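In other words, counting is a summary operation available for any data type; it does not change the type of the underlying measurements. A small Python sketch of the 24/24 example above:

```python
from collections import Counter

# The underlying data: nominal, with two categories.
sample = ["female"] * 24 + ["male"] * 24

# The frequency table is numeric, but it is a summary;
# each individual measurement is still just a category label.
summary = Counter(sample)
print(summary)  # Counter({'female': 24, 'male': 24})

# It makes no sense to average the raw values themselves:
# there is no 'mean gender', only counts and proportions.
```

The counts are interval-like numbers, but they describe the sample as a whole; the data points being counted remain qualitative.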

Next, I will talk about how to deal with these data.

17 November 2010

Amazon UX failing with failure

I ordered a couple of books from Amazon through a third party supplier. However, after a month's wait, I've now been informed that one of the books I ordered cannot be supplied. Annoying but it happens.

However, I noticed that several other suppliers claim to have the book in stock. Great, I can order one. But I want to make sure that I don't order from the supplier I tried before as I will just get an 'out of stock' reply - and waiting one month to get this is not going to make me happy. I could order 2 or more copies which will guarantee that I order from a different supplier but I shouldn't have to do that.

Okay next step - let's examine my orders and see who the supplier is.

Nothing.

Just the other book that was delivered. No mention of the book that couldn't be supplied. Now I cannot find out who the supplier was.

However, Gmail came to the rescue and I was able to find out who the supplier was, so all is okay. But I had to augment Amazon with my own stored emails.

So my question is why does Amazon not keep a record of the actual order? It feels a little bit like 1984 with un-books rather than un-persons, and a little bit unsettling, particularly as I run my own business and have to account for every order and purchase. I'm sure there is a business decision somewhere that makes sense internally to Amazon, but to this customer, the UX leaves a little (and only a little) to be desired.

'Live' information architecture

My most recent project (for LCP Automotive Components - the old site will be current for a short while yet) had an unusual set of requirements. I needed to present something other than wireframes, IA sitemaps, user journeys and the usual stuff. The client to whom the design would be communicated for a go-ahead (i.e., agreeing to spend the cash on roll-out) wanted to see the system actually working.

This was a tall order: the normal UX documentation would not be sufficient, so I had to prepare a 'working' but dummy system - something that actually worked.

I chose the open source route because a) the docs are actually quite good, b) customisation is easier, c) there is a range of platforms and d) it's zero cost. My choice was Joomla.

Before any of this, I went through the normal IA process of looking at the current IA, distilling the information therein, trying to anticipate future needs, and constructing a new IA built around what information will go in there. This was summarised in a site-map and an information board (a spreadsheet detailing what information and interactions go where - quite a useful tool that I will illustrate later).

With an IA in place, I began to fill out the sections and categories (Joomla is limited in this respect as it only has these 2 levels - plus the top level - of hierarchy, but for this site, that's fine). In each section, I put down the bare minimum of information required on each page. Not proper content, but rather lines of text saying, "Link to product PDF information sheet here", "Tell user how to get information on this product recall" and so on. Almost like abstracted information, all ready for a copywriter to get their teeth into.

And it's working quite well. I've used the default design which we can settle on later - Joomla has a nice separation between content and style, so the main task for now is to prepare the IA on the live system to show what goes where. This is used with my client liaison who corrects my mistakes. Once all is ready, we can begin on content and detailed interaction design to be followed with the visual design.

It takes much longer to set up than a sitemap and the other IA documents, but once the content system is mastered, it's not too slow bearing in mind that the content is just abstracted summaries of what is supposed to be there.

The main thing to watch out for (as always) is the interaction design. Quite often, in top-down models of design (in which the IA is done first and then the IxD), I find the IxD presents problems which require the IA to be revised. My own work process is that I do a little IxD early on in the process to see what problems will arise and these are incorporated into the IA as early in the design as possible. This helps avoid a lot of problems.

16 November 2010

Google talking nonsense?

I was just doing some content-related work (some SEO things) for a current project. I had to find strongly associated phrases for 'automotive components'. I know, I thought: I'll use Google's Wonder Wheel to illustrate the associated phrases.

So I entered the search into google.co.uk and requested a wonder wheel. However, one of the associated phrases is "automotive components holdings llc". Problem - llc is a US legal term. The British equivalent is "Ltd" (for limited liability company). This is very official - companies use "Limited" or "Ltd" in their name (Thought Into Design is actually Thought Into Design Ltd).

Ah - my mistake. I didn't filter in only pages from the UK - so I did.

Fantastic - now I get results only from the UK. But the wonder wheel remains the same. I'm not getting phrases associated with searches on British websites - rather worldwide ones. And anything llc is of little use to a British company.

In UX terms, my problem is that restricting the search to UK-only pages implies that the wonder wheel will produce UK-only content. In this case it doesn't (unless British people are searching for "automotive components holdings llc", which I very much doubt).

Screenshot below.

14 November 2010

SEO and rankings?

So Assured yesterday appeared on the 24th page for a related keyword search. That's not bad considering it's only been operational for a couple of weeks (arguably less). This morning, it appeared on page 21, which is a jump of 3 pages in 12 hours. Quite surprising, and things are only beginning. The better news is that Assured has appeared ahead of some quite long-established businesses. For now, this is great - we're doing okay considering the long-term goal is months away.

However, this is not enough: the position needs to be much higher which requires more SEO ninja-fu. However, I'm pleased with progress so far.

The key thing is really to get useful content up on the site: something that people want to hear about. Moving up and down the rankings this far down is relatively easy - the challenge is to get to the front page and move up to top.

12 November 2010

Google rankings

SEO is not my job but I have to pay attention to it.

However, I was pleased to note that my personal site (alansalmoni.com) is on the fifth page of results for 'user experience consultant' - and that's without playing any of the SEO game. I was quite pleased about this because there are lots of other sites.

Being a good net citizen runs parallel to recommended techniques from the search engines which explains the relative success, but I'm wondering if a more dedicated approach to SEO will pay off in terms of ranking?



Assured: The process

How is Assured being made?

First of all was eliciting requirements. The main objective was the most important: getting people to use the website and sign up for insurance advice. From this, all else derived.

From my experience, these one line 'orders' are very useful because they carry the essence of what should be done. When working on projects, sometimes it's easy to get lost in details. It's important to keep dragging back to this 'order' to see if what is being done contributes towards it or not.

Resources: low, pretty much on my own here.

Tasks: Design and develop the site.

Requirements: don't turn users away! This means supporting lots of older browsers so that pretty much any customer can use the site, which in turn means plenty of technical requirements. The full list is:

  • Firefox
  • Opera
  • Chrome
  • Safari
  • IE8
  • IE7
  • IE6
  • IE5.5
This is as close to 100% coverage as we can get (sorry, users of Lynx / Links and early graphical browsers). However, IE5.5 and IE6 are the awkward ones (IE6 especially). Some CSS hacking was needed for rendering to be consistent.

The IA was next: quite simple indeed, and for now there is no need for it to be complex: there is little dynamic content, no online quotes (due to resource constraints), and not much to communicate. This meant that a simple home page at the top with 5 pages in the second level of the hierarchy was all that was needed. Information was then partitioned throughout the site.

The initial template design was done by an external company, Downing Design, and their template required some modifications for information outside of the landing page. This was coded by hand and Chrome was used as the primary testbed. Firefox and Safari rendered pretty much the same as did Opera. IE was the usual kettle of fish but the CSS hacking got around most IE problems.

There are still outstanding tasks for the design - the main requirement was to create something that works and worry about the 'working well' bit afterwards.

So remaining to be done:

  • Improve the content mindful of SEO
  • Improve the form's design
  • Review, test and get feedback
My thoughts on the form were to have 'help' bubbles alongside each of the two fields: each would be in a muted colour (e.g., light blue) until its associated field has focus, at which point the bubble would be highlighted (e.g., orange). This requires some jQuery code but I'm very comfortable with that.


Been a while...

It's been a while since I last posted anything here. In the meantime, I've been busy bringing up my first-born daughter, and working around the world in the field of user experience.

My most recent project was for Assured, a New Zealand-based insurance brokerage and financial advice firm. The project was small in terms of design but had three tasks:

1) Online strategy - making Assured a first-class presence from nothing
2) Conversions - getting people to contact the company after encountering the website
3) Developing it

I'm quite happy to publish information about the design openly as none of it features trade secrets.

More in another article but for now, here is an IE8 screenshot.