17 December 2012

Wireframing with LibreOffice

Given the wealth of excellent wireframing tools, does FOSS offer a cheap and ethical alternative? In this article, LibreOffice's Draw component gets tested on real work.

I'll admit it: I'm not sure whether I should be using OpenOffice.org, LibreOffice or whatever. For this, I used LibreOffice.

My relationship with this monster package goes back some time. I recall using Star Office when it really was Star Office, and build 638 of the newly released OpenOffice.org as it was back then. I think this was the late nineties / early 2000s or thereabouts. Either way, it saved me when a small file I'd saved with MS Word at work refused to open and caused Word to crash on many machines. I copied the file onto a floppy, opened it successfully with OpenOffice.org, saved it under a new name and it then opened fine in Word. Since then, I've been a bit skeptical when people criticise it for its inability to open MS Office files, given that MS Office itself can have a hard time with them.

But how does it do for basic wireframing? I'm thinking about the static stuff here: plain documentation, waterfall / throw-it-over-the-wall style, rather than anything interactive.

So far, it's done well for me. I've done a reasonably complex project here at the Economist Intelligence Unit, creating wireframes for some completely new pages and some existing pages (the entry points into the new ones). All were done using the 'Draw' component.

The benefits are:

  • Easy export to PDF, HTML and SVG along with a range of other formats. The HTML and SVG exports are useful because they can be displayed easily in a browser for testing and research, particularly remotely.
  • Multiple pages are handled easily - this alone puts it a step ahead of Illustrator, which left me trying to stitch several PDFs together; LibreOffice simply produced a booklet.
  • Custom page sizes - page sizes can be specified in pixels too, which makes things easier.

Drawbacks:

  • It's still hard to design fluid layouts. Personally, I find HTML the best tool for this rather than any graphics program.
  • Exported PDFs don't look that good. I can't put my finger on it just yet, but something about the quality seems inferior.

19 September 2012

Inkscape for UX design

I know that suggesting an open source tool like Inkscape to a design team where Illustrator is thoroughly embedded comes across somewhat like recommending a typewriter to a blogger, but having used it lately, I'm quite impressed.

This is part of a series about UX design with open source tools, but it's a quick overview rather than a detailed review. There are many reviews out there, so I'm going to concentrate on the things I found that might be of interest to other users.

Inkscape's interface is very different from Illustrator's, and for many this is a failing. Despite often suggesting new ways to do things, we UX designers can sometimes be stuck-in-the-mud when it comes to changing our own tools or workflows. Given that my first experience with vector drawing programs was DrawPerfect (on DOS no less!), I have some experience with different interfaces, so I'm not so fazed by Inkscape.

What I did find useful was being able to export to SVG, which could be made quite interactive with a little jQuery coding. I found that I was able to design a graphic, save it as SVG (Inkscape's native format) and show it directly in a web page. I could also add interactivity with hyperlinks, mouse-overs and the like, which made it not just a good tool for designing mockups, but also a good way to develop interactive prototypes and even finished pages. One I did required maybe 20 lines of HTML and a series of if... else if... statements to show something when a link was clicked.

Well, maybe not so much finished pages - layouts were distinctly un-fluid and un-responsive-designish; but they certainly worked well enough to display in the wild.

There are a few rough edges but it's still a surprisingly powerful little program. I'm sure that Illustrator priests will miss various features but I really like the idea of an open source alternative that uses an open web standard graphics format that can be shown directly in a browser and made as interactive as possible with little code.

08 September 2012

DuckDuckGo Sugar and Gold

Just in case you don't know, there's a search engine called Duck Duck Go (apologies to Gabriel and team if the spelling is incorrect!). I've been using it for a while now and even had a rap with the founder Gabriel Weinberg about this time last year (my Ph.D. thesis was on search engine usability).

One reason I liked it enormously was that it returned results with very high precision. In search engine terms, this means that few non-relevant results were returned. The other core measure is recall, the proportion of the relevant sites out there that are actually returned; the two are often combined into a single figure, the F-measure. This gave me quite a cheer.
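
In case the terms are unfamiliar, here is a minimal sketch of how the two measures (and the combined F-measure) are calculated. The counts are invented purely for illustration.

    # Toy illustration of precision and recall for a single query.
    # The numbers are made up; they are not from any real evaluation.
    relevant_in_collection = 40   # documents out there that actually answer the query
    retrieved = 25                # documents the engine returned
    relevant_retrieved = 20       # returned documents that were relevant

    precision = relevant_retrieved / retrieved             # 0.8: little junk in the results
    recall = relevant_retrieved / relevant_in_collection   # 0.5: half of the good stuff found

    # A common way to combine the two is the F1 score (their harmonic mean).
    f1 = 2 * precision * recall / (precision + recall)
    print(precision, recall, round(f1, 3))                 # 0.8 0.5 0.615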

The second reason I liked it is that my own personal sites do well in ranking: Searching for 'freelance user experience researcher' shows alansalmoni.com as number 1!

But something else happened that makes me want to spread the love. I was rushing out a design for a new resume (a cross between the traditional resume and an infographic - I have no idea if it will work well!) and I needed some filler text. As part of my experiment in relying on open source software for design, I did the whole thing in Inkscape, which worked out really well, but the function that generates random filler text wasn't working.

I went to DuckDuckGo and searched for lorem ipsum - and got a page of filler text in return!

This was a great time-saver for me and another reason to continue using Duck Duck Go.

02 September 2012

UI Interfaces

[Thumbnails from the Flickr set: Land Rover Experience (user journey, wireframe, finished page), Land Rover Biosphere (user journey, wireframe, finished page), Google Accounts_Sucks, Suggestions_00c to Suggestions_05c, Workflow 0,2, ACW_ASEO, TiD_comb4 and TiD_comb4B, CrowdSorters header, main and ripped versions, ass_01 to ass_04.]

UI Interfaces, a set on Flickr.

A regular update of my latest user interface designs. These come from various sources.

31 August 2012

The Web's visual language

Current interactions are far more complex than they used to be and will probably get trickier still. However, the interaction language we use is struggling to keep up.

This redesign I've been doing for Analytics SEO is coming on well but one thing about this and many previous designs has been nagging me.

When I first used the Web in 1994-5, interactions were simple. Links were (generally) blue and underlined and went to a new page. A button submitted a form. Form elements worked more-or-less as on the desktop.

But current interactions are far more complex. Take the live edit type interaction as seen in Basecamp. Here, an active element (most often text: in this screenshot, it's a date) shows an overlay when clicked on that allows the user to change the element. It's a very nice piece of interaction that helps do away with separate "edit" screens and server-side lags.

But this type of interaction differs from the classic click-on-a-link-and-open-a-new-page-in-this-window. It's all client-side, so it's quick. From my experience researching non-technical users, some will miss this and instead wait for the page to reload. Yes, this happens - I've seen it in testing, and it's embarrassing for participants to wait only to be shown they got the interaction 'wrong', even though it's not their fault.

But we do not communicate the different interactions to users. If I see a link, I have no way of telling whether a) this opens a new page in the current screen, b) this opens a new page in a new screen / tab, c) this opens an overlay, d) this sorts a table's column client-side, e) whatever else might be expected.

Most people seem to muddle through okay but, as I've found, some people just don't. This, I cannot help feeling, is a significant shortcoming of our field. We make these things. If otherwise intelligent people don't understand them, our work has failed to a degree.

30 August 2012

Cognitive modelling

Introduction: This article is one I wrote some time ago about modelling cognition and never released. It's incomplete but might be useful to spur thought and conversation. I suspect that it's more about mental models - my concept of cognitive models was more functional and evidence-based.

What is this and is it useful to practical usability work?

Yes, it is indeed useful. Most current work in usability (as in psychology) focuses upon things that apply to entire populations. We look for a site that will appeal to our target users.

But this doesn't take into account the fact that there are individual differences within our target populations. This is obvious for large populations: a site selling air flights will have almost anyone as its target. In this case, it's clear that people will differ not just in the characteristics that will affect their performance in booking a flight, but also in other characteristics.

However, even relatively homogeneous populations, say a group of dermatologists using a tool to diagnose skin diseases, will also differ.

But despite this, there are things in common, the main one being that they have a human mind.

This is useful for practical research because it means that usability professionals can concentrate on certain things about the mind that are stable across target populations to ensure usability.

The problem then is that understanding these things is extremely difficult. One way of capturing how a mind works is by using cognitive modelling.

There are many ways of doing this: some are simple and some are incredibly complex. Here I will discuss mental models and how usability researchers can understand them. I will use these because they are simple to understand and investigate with low-tech methods.

One of the seminal papers in the field was by my PhD supervisor, Professor Stephen Payne. He investigated the mental models that people had of cashpoints, or ATMs. This work helped designers understand how to make them simple to use for a wide section of the population.

His investigation method was simply the interview. He discussed ATM operation with users and found that they recruited analogies to explain the machine's operation. Quite often, people would use several analogies even when they conflicted with one another. I haven't read the data from this study, but let's take a hypothetical example.

Someone might view an ATM as a sort of one-armed bandit where they have to do some things and out pops money! Additionally, they may have thought of the ATM as a monster that "eats" cards if something is wrong.

For researchers: take careful note of the words that users provide when describing something. The key words for the two analogies above would be something like "payout" (for the one-armed bandit) and "eat" (for the monster). If you notice interesting terms like these being used, try to question them more closely - carefully. If you make the subject feel stupid or self-conscious, they may clam up. Try to get across that you share the analogy, but ask for further explanation.

Once you have the analogies, you can understand their mental models better. From these, you can begin some cognitive modelling by remembering that, with experience, explanations of machines change from the parochial to something closer to the actual operation (in general: some people are fond of their analogies and don't like to change them).

29 August 2012

Be good, always help

In some work for the Analytics SEO re-design, one of my colleagues liked the idea of giving users some information from a drop-down.

Okay, let's step back a bit. You know that on many sites, at the top (and often on the right), there are links to various user account functions: things like the user name and avatar, a link to my account, log / sign out, and so on. Well, Dropbox puts all of these into an overlay which is shown when the user clicks on their name. We had the same idea, but there was a strong business requirement to make these links omnipresent. This meant no overlay, because it wouldn't have had any meaningful content of its own.

However, my wireframe showed extra information: the user's full name and email address, how many sites they have, a link to the upgrade channel, and the links to account and sign out (figure 1). My colleague liked this and suggested that we offered more information about the user's account.

After some discussion, it was clear that our numbers make scaling hard. Some users will have (e.g.) a few thousand monitored keywords; others will have millions or tens of millions (I kid you not - we're enterprise software).

So the design had to clearly communicate the contents across a range of numbers - from single digits up to 8 or even 9 digits - all contained within an overlay.
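
Getting counts to read well at every order of magnitude is partly just a formatting problem. Purely as an illustration (this is my own sketch, not the Analytics SEO code, and the thresholds and rounding rules are assumptions), a compact 'humanised' formatter might look something like this:

    def humanise(n):
        """Abbreviate a count so it reads well from single digits up to hundreds of millions.
        Illustrative only - the thresholds and rounding are assumptions, not production rules."""
        for threshold, suffix in ((1_000_000_000, "bn"), (1_000_000, "m"), (1_000, "k")):
            if n >= threshold:
                value = n / threshold
                # One decimal place below ten units, none above: "4.8k" but "68k"
                return f"{value:.1f}{suffix}" if value < 10 else f"{value:.0f}{suffix}"
        return str(n)

    for example in (7, 4_821, 68_432, 1_234_000, 93_000_000):
        print(example, "->", humanise(example))
    # 7 -> 7, 4821 -> 4.8k, 68432 -> 68k, 1234000 -> 1.2m, 93000000 -> 93m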

I also thought that each number could communicate a message to users, so that they're warned if they're over their limits (and paying more than they expect) and also if they have unused capacity. Figure 2 shows the plan as it is right now. It needs to be validated against our business criteria (by business, tech and testing), and (ideally!) tested with users. We don't have a lot of resources, so it's guerrilla testing for us!

The benefits of this design are:


  • Users can get a quick overview of their plan and capacity - this information is otherwise hidden away somewhere in the 'settings'.
  • Users are clearly warned if they're over their plan's limits (note: this isn't the only warning I have designed)
  • Users can see if they need to upgrade or downgrade
  • Users have a simple path to change their plan easily if needed
  • Users can see what spare capacity they have before they incur further cost


Drawbacks:

  • Users might not find it immediately (it's in an overlay, not omnipresent)
  • It might encourage users to adopt a cheaper plan! Actually, this is okay with us. We'd rather have a happy customer paying a small amount for life than a big spend from someone who is unhappy


But from a quick eyeball, it seems to be quite nice and useful. Users are the true arbiters of that in UX so we have to wait and see... ;-)

I wonder if I'll be able to work any of this into the redesign of my main sites, Thought Into Design, and my own UX portfolio page.

Keyword Suggestions tool for SEO and market research

Part of my role at Analytics SEO is doing research. It sounds like an excuse to goof off and spend time messing around with natural language processing techniques, search engines and the like. And to be fair it is!

One of the first pieces of research I did was looking into the keyword suggestions offered by search engines. You know when you type something in and a drop-down appears below offering suggestions as to what your search phrase might be?

I wrote a little tool in Python to get these things from Google and Bing and we've recently released a tool for customers to get suggestions. I've had a bash with Google's Webmaster Tools API and found the suggestions there to be, well, shall we say a little odd and irrelevant to the websites I was looking for. In contrast, this new tool is awesome and offers some great keywords that could really spark a campaign.
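
For the curious, the research version amounted to little more than polling the engines' autocomplete endpoints from Python. A rough sketch of the idea is below; note that the suggestqueries URL and its JSON shape are unofficial, undocumented and liable to change, and this is not the code we shipped.

    import json
    import urllib.parse
    import urllib.request

    def google_suggestions(phrase):
        """Fetch autocomplete suggestions for a seed phrase.
        Uses Google's unofficial suggest endpoint - an assumption, not a supported API."""
        url = ("https://suggestqueries.google.com/complete/search?client=firefox&q="
               + urllib.parse.quote(phrase))
        with urllib.request.urlopen(url, timeout=10) as response:
            payload = json.loads(response.read().decode("utf-8", errors="replace"))
        # The 'firefox' client appears to return [query, [suggestion, suggestion, ...]]
        return payload[1]

    if __name__ == "__main__":
        for suggestion in google_suggestions("keyword research"):
            print(suggestion)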

The best bit is that our keyword suggestions can also be good for market research; the very fact that something appears as a suggestion implies that the keyphrase is actually being searched for.

In case you're curious, the main tool for developing this research was Python. The final version is hardened PHP because my code was research-quality only and not made for enterprise-quality use.

26 August 2012

Pencil by Evolus - Review of a Wireframing Tool

Most UX designers I know are always questing for the next best wireframing tool. It's not that Balsamiq, Fireworks, Axure or whatever are bad; but UX designers, as a whole, spend an awful lot of time creating wireframes. Programmers: it's like the time you invest in learning a great editor or IDE. Yeah, the real work is done in our heads, but creating an electronic representation is a core part of augmenting our work.

I tried Evolus' Pencil a while ago and liked it but saw that it had significant limitations at the time. I've had reason to revisit it recently as a part of my new job at Analytics SEO, and put it to work on a new project of redefining our core payment flow.

Pencil itself is open source and based around XUL, Mozilla's UI toolkit. This lies behind Firefox and other Mozilla products, so it's been well tested in the wild. But unlike programming, where open source tools have a strong position, this is not true in the UX world, where proprietary software and standards are common. The major exceptions are probably Fireworks (which saves to PNG, an open standard) and anything saving PDFs. This has caused problems in real projects, like when half of my team had upgraded to Axure 6.5, the other half were on 6.0 and the client was on 5.5. Yes, there were ways around this, but the different format for each version made extra work for me.

Pencil can be used in one of two ways: as a downloadable executable (the latest version is only for Windows and Mac OS X) or run within Firefox. The latter is how to run the latest version on Linux. This review comes from the standalone version on Windows and Mac OS X and the Firefox-based one on Linux.

Pencil begins with a nice open frame. It doesn't tell you a lot or lead you in too much. This can be good if a designer is willing to explore but might leave less adventurous / more pressed ones less impressed.

Pencil uses the concept of paper and web: the background is given in various paper sizes and common web page sizes including custom sizes which can be defined in pixels. Having both is useful and after all, why be coy about defining a canvas in pixels when we're doing web design?

The canvas on a page can have a definable grid, but the grid never sits on top of objects, only underneath them. Some might find that limiting: it depends. Personally, I like it on top if possible, but it's not a killer.

Each piece of work can handle many pages. Each page can be differently sized. The file format is "Pencil Documents" and I'm unsure if it's an open standard. I would guess so because it's XML of some kind. Each page can be named and it's a reasonable way to document all of a workflow's wireframes with each page in a separate tab.

Pencil comes with a useful range of shapes, including flowcharts and GUI widgets, so you're all set to go on basic projects if you wish. Pencil also has a community of people willing to contribute new shape libraries, and some are quite useful. These include 'Sketchy GUI' (competition for Balsamiq) and some rather useful touchscreen images to communicate different forms of hand interaction.

Shapes can be positioned by mouse or precisely, by specifying width, height, x-position, y-position and angle. Angle is useful, particularly with text - are you listening, Axure? Only joking - I know that rotated text is available in 5.5, but it felt like a long wait to get there.

Everyday operation

I found it fairly easy to set up a base template for my company's interface. This would be used for minor changes, though we have a complete redesign of the IA under way right now which might well change things. This template can be used easily as the base for a load of other pages.

There were a couple of problems. I found that, on the stand-alone Windows version at least, clicking on a Pencil file to open it showed a completely empty document. This was worrying because I'd spent hours creating it. Looking at it in a text editor showed the data was all still there. The solution was to open Pencil and then go through the menu: File -> Open... and so on. The file then opened fine.

One other oddity is that when you have several pages in a document, selecting all (either via the menu or control-A) selects all elements on all pages. This is out of line with other applications: if you open a spreadsheet and enter data into several worksheets, 'Select all' only selects everything on the current worksheet.

The grid works, but I miss the fine control of Adobe Illustrator. When I first used Pencil, I thought it was a minor point, but it wasn't until it was missing that I realised how much I rely upon a well-formed grid to get layouts pixel-perfect.

These are some issues that occurred to me when using Pencil for a large re-design of Analytics SEO's main application. I'm quite pleased with the results themselves, though the process wasn't painless and a few issues remain outstanding. They are not insurmountable and could be tackled.


A grouped object not showing the arrange option in the contextual menu
  1. I could not find a way to arrange grouped objects. In order to change the arrangement, I had to ungroup, re-arrange the lot, then re-group. Curiously, if I select a grouped object and something else, I can then change the arrangement. My ideal would be to just allow arrangement like any other object. 
  2. Direct opening often shows an empty document. This is worrying because I thought it had deleted hours of work when I went back to it. As it was, opening from within the program got everything back okay.
  3. Select All selects all objects on all pages. Be mindful when you're deleting a page or even just moving all the objects on one page - you might be moving a lot more. The normal behaviour for select all is one page only (e.g., select all in a tabbed browser selects only the current tab's content, not all content in all tabs; spreadsheets select only the current worksheet, not all worksheets).
  4. Treats the enter key as finishing editing a widget, which makes entering new rows into a table tricky. Balsamiq has a nice solution here: the enter key creates a new line and a click outside completes editing. I understand this is somewhat inconsistent with how Balsamiq treats text inputs / text fields, where the enter key does complete editing, but at least I can enter new lines easily where needed.
  5. Cursor keys sometimes finish editing rather than moving the cursor left or right. It's possible to re-position using the mouse, which is less precise, slower and more difficult if the text is partly obscured by the editing widget. I'd like to use the cursor keys to navigate the content rather than having to position with the mouse.
  6. Text being edited is partially covered by a formatting widget, and using the arrow keys doesn't navigate the content either: it ends editing.
  7. The editing widget partially covering the text only happened in the Firefox-based version. I'm not sure if it's a XUL issue or not.
  8. Objects sometimes refuse to re-size. Closing and reloading the file doesn't help either. My solution is to copy the object and delete the old one, but this often requires re-arranging the stacking order.
  9. When resizing or repositioning an object using the 'Location and size' control, tab moves between graphical objects rather than between fields, and it doesn't submit the value. Enter does submit but doesn't move between fields. Ideally, this widget would be treated like a form: tab moves between the fields and submits changed values; enter just submits a value and closes the widget unless it's permanently displayed.
  10. The position and size fields would be very useful to have on-screen all the time: some Adobe products let users display them permanently (e.g., Fireworks) and this would be a really useful feature.
  11. Changing an object's arrangement seems to require a context-sensitive sub-menu: no keyboard shortcut is apparent. This is very slow when an object needs to be moved a lot. A quick keystroke to move up / down through the stack would be much quicker.


Overall, I've enjoyed using Pencil and it shows real promise, particularly as a competitor to Adobe's Fireworks or Axure. Open Source has yet to come up with something as capable as Fireworks (and no, it's not just a weakened Photoshop - you can create states and export to clickable prototypes). This alone makes it very exciting for me. The outstanding issues, however, make me feel that it's not quite ready yet for prime-time wireframing though it could well be there soon.

11 August 2012

User Experience with Open Source Tools


This must be my third week using open source tools for user experience work and, to be honest, the experience has been okay. The tools are mostly extremely capable - the most impressive were the statistics packages (R and PSPP) and Pencil, a rapid wireframing program.

My work has lately been data analysis: I've been comparing a tool from my company against competitors. Coming from a background in psychology with a post-doc in education, the best way to investigate was, to me, to use Cronbach's Alpha, a test of reliability (or internal consistency if you prefer). In short, this takes a few different ways of measuring something and asks, "Are they measuring the same thing?" If they are, then the scores from our tool and those of our competitors should vary by the same amount at the same places, and indeed we found that they did.
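
For anyone who wants to run the same kind of check without PSPP, Cronbach's alpha is simple enough to compute directly. A minimal sketch with invented scores (not our actual data):

    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for a matrix of scores with shape (observations, measures):
        alpha = k/(k-1) * (1 - sum of the item variances / variance of the total score)."""
        k = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1)
        total_variance = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Invented example: three tools scoring the same five websites
    # (rows = websites, columns = tools).
    ratings = np.array([
        [72, 70, 75],
        [55, 58, 54],
        [90, 88, 91],
        [63, 60, 66],
        [81, 79, 83],
    ])
    print(round(cronbach_alpha(ratings), 3))  # close to 1.0: the tools vary together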

If anyone's curious why a UX designer is doing statistical analysis, well, it is one of our core skills. We have to be numerate to understand data from analytics, experiments and the like. To me, it's probably a more important skill than drawing, which is a contentious point. I would, however, prefer to design something that works well and looks poor rather than vice versa. It's fairly easy to make something look okay, but the converse - well, think "lipstick on a pig".

Using PSPP (an open source version of SPSS) was quite fun, particularly as the GUI now works quite well. It was a fairly simple task to run the test and check the results, which showed that, between us, it was the "industry standard" tool that was the odd one out. My next bit of work is checking why it's the outlier, which is a whole new article.

Other work included creating icons with Inkscape. These had to be crunched down to 16x16 PNGs, and they turned out alright. I cannot imagine doing any better with Illustrator, and it felt like I was the biggest limitation on the quality of the output, which is the sign of a good tool.

The GIMP has problems. Its layer model differs enough from the Adobe Suite's (I talk as if there were a consistent model amongst Adobe's tools) that I was dismayed to have to learn another way of doing things. My ideal is the Fireworks model, which I get on with pretty well even though I've used Photoshop for many more years.

Pencil is a great wireframing tool. It can do the Balsamiq-style cartoony wireframes but can also handle higher-fidelity ones. The grid, which appeared from version 1.3 onwards, makes a big difference to layout, and the results are as good as most things I've used. The interface feels less messy than Fireworks, though there are fewer effects to polish things off. I know some designers will get hot under the collar and insist that wireframes are throw-away documents and the more scratchy and crappy-looking, the better; and this is so on most occasions. But there are times when higher-fidelity mock-ups are necessary.

I actually hold out hope for this tool and will consider making a donation to it in the hope that it continues. I wonder if I can convince my CEO to make a company donation... ;-)

24 May 2012

Into SEO land!

I was approached recently for a great job. After talking with Laurence O'Toole, the founder of Analytics SEO, I decided to take a full time position there.

My role is mostly UX research and design, and my primary task will be working on their SEO software (research, testing and design), but there is also the chance of some incredibly cool stuff like big data analysis and natural language processing - and I'm hoping that Roistr will have its input there. Certainly it seems like a great way for people to test linked content against their website's content to see if they're similar enough - or even to make a massive semantic 'map' to discover related website content.

So far, we have lots planned and a lot of it already in operation. More news here.

22 January 2012

UX Design Planning



Summary

When designing from a UX perspective, I find it helpful to plan all my work in a spreadsheet. This forms a useful reference throughout the design and helps ensure that requirements are not missed.

The WorkPlan for Design Planning 

When planning a design, UX designers need to take into account a large number of factors. Obviously, we have user requirements and these are often foremost in our minds because it's what we're good at. However, we also need to take into account business and technical requirements: the former because any design has to meet the needs of a business; the latter because it has to be built.

But this can be a lot of work. Mentally juggling all these requirements (which can be contradictory or at least require problem solving) is hard for complex projects. It's easy to forget a single line in a 90-page business requirements document that turns out to be crucial.

So I use a spreadsheet to plan. This workplan is not exciting by any means and certainly not pretty, but it is effective. It's not used to communicate to stakeholders except possibly developers who, on the whole, quite like how explicitly it communicates.

For smaller jobs, I often stick to a simple user journey, but limitations of space can make a graphical version quite hard to follow. Using a spreadsheet allows a quick display of information and (when there are time constraints) it's very quick to update.

The final advantage is that it delays putting pen to paper until later in the process. Sketching is so much fun that it's tempting to do it early (as I mentioned before, I often do it early as a reality check / interface stressor). I have found myself to be more productive if I can make sure I have all the information requirements (user, business, technical) in place and validated - and a single piece of work containing the lot.

High Level

Like a lot of UX, we begin at the highest level (note: I often try to work at multiple levels simultaneously but that's another article) by noting down the high level structure. This equates roughly to a sitemap.

Each separate area of function / content is laid out in a separate worksheet. Large projects can quickly get very unwieldy but it's easier than referring to work docs, wikis etc.

It's possible to develop the sitemap after this, before this or in tandem with this part. The important thing is to stick with whatever the designer feels is best and the first established element should guide the rest.

Function Level

Within each high level area of function / content, I then note the functionality required of this area. This is again at a higher level than we'll be working to later.

The important content here is a breakdown of the unit of function. Included are the overall aims (user goals), pages that link in and links out, and the fundamental information exchanges. It's possible to begin the information exchanges at a high level and work down (i.e., begin with "Enter user's address" and end up with "User's house number / name", "User's street", "User's postcode / zipcode" and so on).

Specific requirements can be noted here too: things like date fields that must not accept dates after the current date when retrieving historic records.
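
If it helps to see the same breakdown as a structure rather than a worksheet, here is a hypothetical sketch of one function-level record; the field names are mine, not any kind of standard.

    from dataclasses import dataclass, field

    @dataclass
    class FunctionArea:
        """One worksheet in the workplan - a hypothetical structure for illustration only."""
        name: str
        user_goals: list = field(default_factory=list)            # overall aims
        links_in: list = field(default_factory=list)              # pages that lead here
        links_out: list = field(default_factory=list)             # pages this leads to
        information_exchanges: list = field(default_factory=list) # data in / data out
        specific_requirements: list = field(default_factory=list) # validation rules etc.

    address_entry = FunctionArea(
        name="Enter user's address",
        user_goals=["Provide a delivery address"],
        links_in=["Basket"],
        links_out=["Payment"],
        information_exchanges=["House number / name", "Street", "Postcode / zipcode"],
        specific_requirements=["Postcode is mandatory"],
    )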

This plan is a work in progress and will change as the rest of the site or application is developed.

This stage equates somewhat to user journeys: it describes the user journey in detail.

Content Level

With the function in place, we can begin to plan content. Things like necessary legal conditions (if there are terms and conditions, links to these can be included in the links out section if necessary), instructions, and pagination for workflows can be included here.


So now we have a breakdown of a page with all the necessary functionality and information. This defines the information constraints within a page and gives designers a very quick overview of all the requirements that the page has to meet.

The usual process is to get these validated somewhere (this might be with stakeholders, other members of the UX team or whatever the organisation deems best) and then wireframe design fun can begin.

18 January 2012

Statistics in UX - Part II

This is the second part of my article on statistics in user experience. Part I is here and discusses types of data. In this article, we begin to see what to do with it.

Descriptive statistics

This is essential to analysing data, and it is what some people think of as the whole of statistics.

Descriptive statistics just describe the data. There are various measures you can use (depending upon your research questions) to understand what happened when you collected the data.

But first, think back to the last article: we discussed nominal, ordinal and interval data. The types of data you have will shape the kind of descriptive statistics you need. There are good reasons behind this: statistics is far from being a purely academic exercise. It's based entirely upon understanding the world around us.

Measures of central tendency

These measures tell you where data tends to be centred. The catch-all word "average" covers a number of measures which you have most likely heard of at some point. Each of these measures has assumptions which have to be met before it can be used.

The mean, or more correctly the arithmetic mean, is a measure used for interval data. It's fairly meaningless for ordinal data and certainly for nominal data. Imagine you did a survey with 100 respondents, coded women as 1 and men as 2, and 60 of the respondents were women. Can you really say that the mean sex of respondents was 1.4?

The mean is calculated as the sum of all values divided by the number of values. So a list of 4 response times [234ms, 265ms, 289ms, 198ms] would have a mean of:

(234 + 265 + 289 + 198) / 4

= 986 / 4

= 246.5.

The mean might seem to make sense for ordinal scales like Likert scales, but there is a danger. The mean works well only when there is a roughly normal distribution of data (a bell-curve). Even then, it can be hard to make sense of the result. A better measure to use is the median.

The median is another measure of central tendency: it produces the middle value. Imagine we took all the values, sorted them into order and found the central point. If there is an odd number of values, the single middle value is the median. If there is an even number, the mean of the two central values is the median.

Say we have a 5-point Likert-type scale given to 11 people. Responses are [1,2,4,3,5,3,2,1,2,1,2]. Sorted, this list becomes [1,1,1,2,2,2,2,3,3,4,5]. There is an odd number of values, so the single central value ([1,1,1,2,2,2,2,3,3,4,5]) is the median: we can say that the median response is 2.

Compare this with the mean, which is roughly 2.4 - not vastly different, but different enough.

The median is useful when the mean cannot be used: remember the assumption of a normal distribution? Well, if you want to calculate the mean of a nation's salaries, you will probably not get a normal distribution: rather, it will be skewed to the right (a positive skew), because most people earn relatively little and only a handful rake in the millions we hear about (very extreme and uncommon values are called outliers). Reporting the mean will be misleading.

The median protects somewhat against skewed data. By taking the central point, a more representative figure is found.

The mode is controversial. Some statisticians say it is a measure of central tendency; others say it is not. The mode is the most commonly occurring value. For interval data (like response times), the mode doesn't make much sense: there may well be no mode because each response time happened only once. But for Likert scales it makes sense: with the above data, we can say the modal value is 2 because it occurs 4 times, which is more frequent than any other value.

It also makes sense to report modes for nominal data. With the above example of sex, we can say the modal value is female (but we'd have to report the statistic, so something like "Of the 100 respondents, 60 were women" is probably enough).
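
To tie the three measures together, here is a quick check of the worked examples above using Python's standard library (the salary figures at the end are invented purely to illustrate skew):

    from statistics import mean, median, mode

    response_times_ms = [234, 265, 289, 198]
    likert_responses = [1, 2, 4, 3, 5, 3, 2, 1, 2, 1, 2]

    print(mean(response_times_ms))           # 246.5
    print(median(likert_responses))          # 2
    print(mode(likert_responses))            # 2 (it occurs four times)
    print(round(mean(likert_responses), 2))  # 2.36 - close to the median here

    # The mean is dragged around by skew: a few extreme salaries pull it upwards
    # while the median stays put.
    salaries = [18_000, 21_000, 24_000, 26_000, 29_000, 2_000_000]
    print(mean(salaries), median(salaries))  # mean 353000 vs median 25000.0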

The choice of what statistic to use depends upon your research question. If you're trying to find out the likely disposable income of users, the median will be best. If you're answering a question about tax income on a national level, then the mean is best.

There are other measures of central tendency (harmonic mean, geometric mean and so on) but they are rarely used in UX.

In the next article, I'll talk about measures of variance.

17 January 2012

50ms to rate a webpage!

This article was first published on 18 January 2006.



A recent study shows that decisions about webpages may be made in the first 50 milliseconds of viewing. This has implications for website designers. In this article, we investigate this claim.

It’s been widely reported in the media that websites are evaluated in 50 milliseconds - this is all the time that designers have to make a good impression. This article will discuss these findings i) in terms of the quality of the research itself; and ii) if the research is valid, what are the implications for designers?

The paper is by Lindgaard et al (2006), but instead of believing everything I read in secondary sources, I decided to take a look at the actual paper itself. After all, there’s nothing like going to the primary source!

Human visual processing needs a certain amount of time to recognise objects (in the order of a few hundred milliseconds), but emotional judgements are made far more quickly, the authors contend. Further, later decisions about an object are not so much based on rational thought as on the principles of cognitive confirmation bias (cognitive constancy), in which aspects congruent with the initial decision are focused upon and used to justify it. This means that the very basic elements of layout and design are important in the judgement of a website’s quality, which reduces the importance of other factors such as content and facilities.

This study is important because, if valid, it indicates that the aims of usability (to ensure that websites meet the “holy trinity” of being effective, efficient, and satisfying to the user - ISO 9241) may be less important than a gut reaction to a site’s “coolness”. The very idea that cool sites might fare better than more usable but drab ones strikes at the heart of the work of usability practitioners. After all, no amount of painstaking work will compensate for having a site that just looks drab (and if so, should Jakob Nielsen beware?).

Analysis

My reservations about the paper come from three different areas.

Measurement scale

The first is the measurement scale in the 50ms condition of the third experiment. The other two experiments examined the stimuli with an exposure interval of 500ms. The third experiment correlated responses between a 50ms and a 500ms condition while introducing a new measurement scale. Earlier measurements were made using a computer-presented line whose only prompts were “very unattractive” and “very attractive”, one at each end of the line. The third experiment introduced a 9-point scale. Although there is nothing strictly wrong with this, I would prefer to introduce only one new element per experiment. Introducing more than one leaves the risk of a complex interaction between the new elements, whereas if all but one element stays the same, you can be sure why any difference occurred.

New design

The third experiment introduced a between-subjects design: participants saw web pages presented only for 50 ms or 500 ms. I felt that a within-subjects design would have offered more power to investigate the research question because it would have allowed comparison of the same page by the same person under different conditions.

A respectable interval between the two testing phases would have reduced the probability of demand characteristics (i.e., participants “remembering” their scores from a previous exposure). If the interval was, say, a week, then I would feel more confident of assertions about intra-rater reliability.

Can measurements be made in less than 500ms?

The analysis for this involved collapsing attractiveness responses for each participant across all webpages and then correlating them. I don’t feel too confident about this because collapsing scores (to me at least) should be done very carefully. There is a danger that a lot of variance is removed from the analysis, making a Type I error more likely.

If scores are collapsed across pages, it is quite reasonable to expect the scores to lie close to the median or grand mean of all scores. A correlation might be meaningless in this case. The alpha of interrater reliability is not reported. If a within-subjects design had been used, reliability could have been tested with an intraclass correlation which would be more meaningful. Unfortunately, I think that the association between the 500ms and 50ms scores is less strong than reported.
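
To make the worry about collapsing concrete, here is a small, entirely invented simulation (not the paper's data) showing how correlating averaged ratings produces a much stronger association than correlating the raw, noisy ratings: averaging removes variance, so the collapsed correlation flatters the data.

    import numpy as np

    rng = np.random.default_rng(42)
    n_participants, n_pages = 30, 20
    true_quality = rng.normal(0, 1, n_pages)  # each page's 'real' attractiveness

    # Ratings in the two exposure conditions = true quality plus plenty of individual noise.
    noise = 1.5
    ratings_50ms = true_quality + rng.normal(0, noise, (n_participants, n_pages))
    ratings_500ms = true_quality + rng.normal(0, noise, (n_participants, n_pages))

    # Correlation over the raw ratings, keeping all the variance.
    raw_r = np.corrcoef(ratings_50ms.ravel(), ratings_500ms.ravel())[0, 1]

    # Correlation after collapsing (averaging) the ratings for each page.
    collapsed_r = np.corrcoef(ratings_50ms.mean(axis=0), ratings_500ms.mean(axis=0))[0, 1]

    print(f"raw r = {raw_r:.2f}, collapsed r = {collapsed_r:.2f}")
    # The collapsed figure is reliably the larger of the two, because the averaging
    # has already removed most of the participant-level noise.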

True generalisation?

The authors report work by Zajonc which claims that judgements made at 500ms are reliable indicators of longer-term judgements. I would prefer to see specific evidence of the halo effect on longer term judgements when webpages are concerned. It would be interesting to see what scores participants would have made with a long term exposure. While only anecdotal, I have encountered websites that made me cringe when I first went there, but the content was rich enough for me to override an effect of confirmation bias.

Conclusion

I really liked the first two experiments and consider them valuable additions to the field. It is certainly important to realise that judgements about a website may often be made on the tiniest of exposures. However, I have concerns about the design of the experiment that lead me to reject the claim that designers have to build sites that impress in 50ms exposure. These concerns would be easily wrapped up in a couple of experiments: use a within-subjects design and test intra-rater reliability with an intraclass correlation coefficient; test the relationship between short term (ie, 500ms or less) exposures and longer term judgements made after at least some degree of interaction with the site; and keep the measurement scales constant between experiments.

On the whole though, a good read!

References

Lindgaard, G., Fernandes, G., Dudek, C., & Brown, J. (2006). Attention web designers: You have 50 milliseconds to make a good first impression! Behaviour and Information Technology, 25(2), 115-126.

Hate dialogs, love user interaction?

This article was first published on 1 November 2005. This type of interaction crystallised thoughts I'd had for a long time. It is now commonplace though the first were introduced long before this article so I cannot take any credit for them.





“What’s that? What do I do here?”


If you’re like me, you probably regard dialog boxes as a necessary evil. I know that they are ubiquitous and appear in almost every application, but I still find myself considering new ways to implement what they do. Here, I propose a different way to implement dialogs for applications that doesn’t use awkward boxes.

Let’s start with a definition of what a dialog box is. A dialog box is a GUI element separate from the main window of an application that provides information to the user, and may (but not always) require information in turn. The need behind them is important: programs will occasionally need to interact with a user before an action can be continued, for example, changing the font of a word in a word processor. However, these bits of information are not required often, and presenting the communicative components (the bits that actually interact with the user) on-screen all the time wastes screen space.

“I have to go back - can I keep the dialog on screen to remind me what I’m looking for?”

“Yuck! - Dialog boxes!”

I dislike dialogs for two main reasons.


  1. Dialogs interrupt work. When a dialog appears, the user will pause to try to understand what it wants and what they have to do to get rid of it and complete the task they wanted. This means taking time away from their task. A complex dialog can severely affect whether the task is accurately recalled: I have observed, and experienced myself, how easy it is to forget what I was doing because I was concentrating on a dialog too much. Reminders such as dialog box titles can help, but they only inform the user of what their subtask is, not what their main task was. Sometimes programs require lots of information, and that is unavoidable; this is where the demands of an interface should be kept as low as possible, to prevent an interruption that forces recall from long-term memory. The designer’s task is to ensure that the interface presents as little challenge to the user as possible.
  2. They are commonly (though not always) modal - the window from which they came cannot be interacted with. This is a problem if the dialog needs information that can only be got from the main window but isn’t currently visible. In that case, the user needs to work out what information is needed, memorise that, cancel the dialog, find the information, memorise it, remember how the dialog was created in the first place, do that action to produce it again, remember where the necessary information was supposed to go, recall the required information, and put it where it was supposed to go.
Though this is a long description of what happens, doing it feels every bit as onerous. Further problems arise if the dialog box is overly complicated and more information is required. Then the user needs to remember what is now needed, cancel the dialog, find the information, re-create the dialog, find where the new information is supposed to go, input it, and then re-enter all the information that was required before. This begins to put an unreasonable strain on the user, who shouldn’t need to remember all this information. Computers are good at recalling information and we should use them for that.
Some dialogs are not modal and allow interaction with the main window. Modal dialogs are sometimes necessary, but I feel they are used far more often than they should be.

The second point is difficult to deal with - when a program requires infrequently used information, the associated interface will probably be rarely encountered. The first point, however, can be dealt with.

Integrated dialogs

Here is a mock-up of an interface that I’ve worked on. This interface uses the typical modal dialog box and is of a commonly encountered type for a rich text editor. The dialog is a search function for finding text.



As you can see, this is nothing new, but the dialog obscures the content. Users can move it, of course, but this requires user intervention that is not task-related, and other things will just be obscured instead.

The next interface is of what I call an “integrated dialog”. Instead of using a separate dialog box, the interface is embedded into the main window of the program. When the program needs extra information, the main interface moves down (gently scrolls, but not too slowly) to reveal the controls needed.



Screen space for the main window’s contents is reduced when the dialog is on-screen, but only for the duration of the dialog’s existence. Once the user has provided all the necessary information, the dialog disappears and the screen returns to normal operation. The dialog mock-up above is simplified, but the space taken can be increased as needed.
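
As a rough illustration of the idea (a throwaway sketch in a desktop toolkit, not the web mock-up shown above), an integrated find “dialog” can simply be a hidden panel inside the main window that appears on demand and leaves the document fully interactive:

    import tkinter as tk

    class Editor(tk.Tk):
        """Toy text editor with an integrated (non-modal) find panel instead of a dialog box."""

        def __init__(self):
            super().__init__()
            self.title("Integrated dialog sketch")
            self.text = tk.Text(self, wrap="word")
            self.text.pack(fill="both", expand=True)

            # The 'dialog' is just a frame inside the main window, hidden until needed.
            self.find_bar = tk.Frame(self, bd=1, relief="raised")
            tk.Label(self.find_bar, text="Find:").pack(side="left", padx=4)
            self.query = tk.Entry(self.find_bar)
            self.query.pack(side="left", fill="x", expand=True, padx=4)
            tk.Button(self.find_bar, text="Next", command=self.find_next).pack(side="left")

            self.bind("<Control-f>", self.show_find_bar)
            self.bind("<Escape>", self.hide_find_bar)

        def show_find_bar(self, event=None):
            # The main window shuffles down a little to reveal the panel; the text stays editable.
            self.find_bar.pack(side="top", fill="x", before=self.text)
            self.query.focus_set()

        def hide_find_bar(self, event=None):
            self.find_bar.pack_forget()
            self.text.focus_set()

        def find_next(self):
            target = self.query.get()
            if not target:
                return
            start = self.text.index("insert")
            match = self.text.search(target, start, stopindex="end")
            if match:
                end = f"{match}+{len(target)}c"
                self.text.tag_remove("sel", "1.0", "end")
                self.text.tag_add("sel", match, end)
                self.text.mark_set("insert", end)
                self.text.see(match)

    if __name__ == "__main__":
        Editor().mainloop()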

Benefits

What benefits are there of using this type of interface?


  1. The main window is still open to manipulation. If the user needs to retrieve some information from the main window to complete the dialog, they can just interact freely with the window.
  2. The dialog can be completed piecemeal instead of having to do it in one big chunk. This lowers the memory demands of more complex dialogs.
  3. The dialog remains on screen while the user is retrieving this information. I have sometimes seen users re-create dialogs just to remind themselves of the information needed to complete them.
  4. Coloured guides can (if relevant) point to and remind the user of what part of their content is being manipulated. For file dialogs, they will not be needed, but for changing the font or style of a piece of text, this information can remind the user of the content which might provide cues as to the context of their task. This will help the user recapture their task if it is forgotten.
  5. Only a relatively small amount of screen space is used, possibly less intrusively than the normal dialog because it uses the width of the application.

There are situations where such a dialog should not be used. The most obvious is when the program demands that the user not change or view the contents of the screen; the second is when the dialog requires a large amount of information from the user and too much screen space would be taken. For the second, it may be better to break the dialog into separate pieces or find some other way of getting the information from the user.

Conclusion

Computer programs can perform complex tasks, but these in turn have complex information requirements of users. Interfaces that elicit this information may themselves have to be complex, but reducing the cognitive load on the user can often help them complete their tasks.



Web usability - non relevant links

This article was first published on 5 August 2005.



“The road to hell is paved with good intentions”, or so it is said. Perhaps the saying should be, “dissemble at thy risk, for blindness is not contagious”.

Okay, that’s the arty-nonsense out of the way, but the thesis of this article does strike me as one that is becoming increasingly relevant. When I use search engines, I expect to get a list of documents that may (or may not) answer my concern. I’m experienced enough at using search engines to know that I will likely encounter pages that I don’t want or need, but that’s okay as long as I can access pages that I do need.

The problem is though that finding information that I need can sometimes be harder than it should be because lots of folks out there want to help me. Or rather because they think they have a great business plan that just stops me from doing what I need to do.

Try this: search for a shop that sells computers in Cardiff. I’m interested in what search terms people use with which search engines. Feel free to leave a comment if you want describing what you did and what you came up with.

I’ve tried the same thing myself. By coincidence, I live near City Road in Cardiff, which has more computer shops than just about any other road in the city. They are all within easy walking distance of me, though still enough of a trek that going round them all wears out my shoe leather.

So instead, if I want some computer hardware, I turn to a search engine and enter “computer shop cardiff”. The problem I have is with the results that Google returns (as I said, I’m interested in what search terms other people would use with other search engines, so enter a comment if you want).

What I get are pages and pages of directories, often designed to offer access to “local” facilities, shops, businesses, etc. These are not much use to me: after all, consider the logic. I go to a database of possible websites, enter a search phrase and receive back… pages from other databases.

These directories wouldn’t be such a loss if they actually had links to the websites of the shops, but they rarely do - most often an address (which I know) or a telephone number (not much good on a Sunday when I usually browse for this kind of stuff), but not a website address.

This is not good - I know for a fact that these shops have their own websites. I’ve been to them before, but I never bookmarked them anywhere. While I can locate just about anything I want for my academic work, I cannot find the website of a shop just around the corner!

Craigslist is a bit of a commercial success story and best of luck to them. The problem as I see it is that many are trying to copy the idea to see if the Craigslist implementation could be improved. If you do have a “local” list of businesses of your own, please remember that not everybody is after just addresses and telephone numbers. Try to offer value, maybe reviews of local businesses or facilities. If the public won’t write them for you, go out and try yourself - after all, the businesses are local, aren’t they? Even just a telephone call with a slightly awkward question can give you some insight that you can pass on to your hopeful readers. The important thing is that you must offer something useful to people. Don’t try to hold on to customers by having them click through pages only to encounter the same useless information time after time. The best websites are those that offer something of value to their customers. If your content is good enough, they will come back and keep coming back. If it isn’t, they will all (eventually) just walk away.

Just don’t try to be the next “Yellow Pages”. It’s been done, and there are already too many people trying to catch a boat that has sailed.

Designing for Amnesia

This article was first published on 5 August 2005.



Here’s an idea: software’s usability is measured with reference to its effectiveness, efficiency, and user satisfaction (ISO 9241). Simple and straightforward stuff that every usability person should know and much can be inferred from running even just a few observational studies - sit the users down in front of a computer, tell them to do a particular task, and watch them make mistakes, maybe get them to complete a questionnaire with a Likert-type scale. As an aside, I’ve always been wary when no mistakes are made by the users. It either means that the users are too familiar with the software, or that the task wasn’t hard enough. Don’t forget that there needs to be a bit of a challenge in these tasks!

But this makes me think of problems with observational data. Yes, it is argued that they possess a high degree of ecological validity (in other words, they are quite realistic tasks), but they are not real life. Putting cameras in workplaces without the staff knowing and observing them that way is probably illegal - don’t do it! - and hard to control (just when you want them to perform a task, say editing a document, they may go off and have a cup of coffee or a natter).

Information Overload

But a recent article showed that many workers are constantly interrupted in their work by emails, phone calls, meetings, and who knows what other demands, leading to cognitive overload (originally called “information overload” by Jan Noyes). And of course, if somebody is in the middle of doing something, such distractions cause delays as the person tries to remember a) what they were doing, and b) how far they had got before they were interrupted. Sometimes, important knowledge stored in short-term memory may have been forgotten completely. This is not good for effectiveness, efficiency, etc.

In a worst case scenario, the user will completely forget their task and have to begin again. At best, the user will have enough information available to continue after giving themselves a short recap of what they were doing. Both cause a slight delay though the latter is acceptable and the former intolerable.

If workers are interrupted so much these days, should designers change tack? Instead of assuming that users have a nice task to sit down and do until completion, should we assume that at any point they might be fatally interrupted? (by “fatal", I mean fatal to the task, not the user!)


I’m thinking that maybe we should consider users to be mildly amnesic rather than fully switched on professionals dedicating their lives to the task and nothing else. Considering users to be amnesic would help us to model the effect of interruptions upon task completion and therefore the integrity of the interface.

So what is amnesia?

Amnesia is a condition in which people have problems using their long-term memory: information learned while amnesic is not available. The failure may be one of encoding (in which case the information was never stored to be remembered) or of retrieval (the information was stored but cannot be accessed). Personally, I think retrieval is most commonly the culprit - people with amnesia (complete long-term memory loss after a traumatic event) can sometimes learn new things, but not in the same way as non-amnesics (that’s you and me). The information appears to be procedural rather than episodic, takes a very long time to learn, and doesn’t appear to be available to conscious access.

Designing for Amnesia

But consider designing an interface. A person with amnesia will be told of a task and will begin to execute it after an appropriate amount of planning. However, there is a good chance that while performing an intricate subtask, the main task will be forgotten. I’ve been in this situation myself from time to time and it’s annoying ("What on earth was I trying to do?"). If we design for amnesia, we can offer enough information within the interface to remind the user of their task. Reminding them of the context is a different matter, though, and is probably not possible. Consider: if my task is to write a letter to my bank manager, I begin by finding out his or her address and then typing it in (at least, that is how I approach it). If I were amnesic, I might forget my main task just after completing that subtask. I sit there trying to remember what I was doing, but the interface only tells me that I was writing a letter to my bank manager and nothing else. Without explicitly informing the machine of my task (itself an interruption), the system can offer little clue as to the task context.

Systems already provide some information: for example, dialog boxes have titles that tell the user which operation they have selected, and these titles can help remind them of what they were doing when they return to a task.
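
As a rough sketch of what I mean (browser-flavoured TypeScript; the setTaskContext helper and the task-context element are inventions for this example, not any particular system), even the window title and a small banner could carry that reminder:

    // Surface the current task and subtask where a returning user will see them.
    function setTaskContext(task: string, subtask?: string): void {
      // Put the task into the window / tab title...
      document.title = subtask ? `${task} - ${subtask}` : task;
      // ...and into a persistent banner element, if the page has one.
      const banner = document.getElementById("task-context");
      if (banner) {
        banner.textContent = subtask
          ? `You are ${subtask}, as part of: ${task}`
          : `Current task: ${task}`;
      }
    }

    // The letter-to-the-bank-manager example from above:
    setTaskContext("Write a letter to the bank manager", "looking up the address");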

A system that held this contextual information would also offer room for the user to open a dialogue with the machine (note the spelling: I don’t mean “dialog box”, but “dialogue” as in discourse) to help them perform their task. The real benefit is that, with contextual information, a user could have all the interruptions in the world (even a year’s holiday), come back, and recover their position without too much trouble. Even somebody else could sit down at the machine with no knowledge of the previous user or the system and yet understand what was meant to be done. In terms of collaborative working, such a system would be very useful indeed.

I see the future of systems as providing a rich enough level of detail for any user to infer the original context of the task. However, this information sits firmly at the subtask level. You might remember that you were saving a document called “letter to bank manager”, but does that remind you of exactly what you meant to say? It can help, but it’s no guarantee.

And how to do this

The problem is that many tasks have a long history which cannot readily be put into a nice, neat summary. The information required to accurately infer the original task’s context may differ from person to person, but there is likely to be a core of information that will suffice for the purpose.

I would guess that the user’s history of tasks would be the readiest way of providing contextual information. Problems will arise when the person multi-tasks, as different subtasks may not be related to each other and the history then becomes misleading. Even so, the history of actions may be a useful way of doing this. I think I will run an experiment to see what a person can infer from a web browser’s history list, and whether it is possible to reconstruct a real-life task from historical information.
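
To make that concrete, here is a rough sketch (TypeScript; the names are illustrative only, not a real system) of the kind of action history I have in mind:

    // Each user action is logged with the object it applied to, so the
    // recent trail can be replayed as a recap after an interruption.
    interface ActionRecord {
      timestamp: Date;
      action: string;  // e.g. "opened", "searched for", "started editing"
      object: string;  // e.g. "letter to bank manager.odt"
    }

    const actionHistory: ActionRecord[] = [];

    function record(action: string, object: string): void {
      actionHistory.push({ timestamp: new Date(), action, object });
    }

    // On returning from an interruption, show the last few steps as a recap.
    function recap(count: number = 5): string[] {
      return actionHistory
        .slice(-count)
        .map(r => `${r.timestamp.toLocaleTimeString()}: ${r.action} ${r.object}`);
    }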

Initial results will be published here. Assuming of course that I cannot find any other studies on this topic! Leave a comment or email if you can find anything.

Enterprise UI Design Patterns - Part III


Summary

After Enterprise UI Design Patterns parts I and II, this final part is where I talk about getting the pattern library working within the enterprise.

Enterprise UI Design Patterns - Part I
Enterprise UI Design Patterns - Part II

Validation

Once we’d documented a few patterns, we went back to the stakeholders to validate them against stakeholder needs. The patterns were considered in detail and suggested amendments were incorporated into them. Once we felt confident that we were on the right track, we began documenting the full range of patterns in full detail.

Acceptance

This was one of the most important stages: we had to get the patterns accepted by the primary target audience, the producers. The patterns had to complement their work as much as possible for them to be useful.
One of the stakeholders was a representative of the producers, who ensured that producers’ needs were communicated during the development of the pattern library.

The primary delivery channel was the patterns as Word documents. This was not ideal, but it helped us put the documents up quickly in order to get feedback on them.

We also planned a more interactive delivery system. Within this system, producers could search against tags / keywords, or they could begin by viewing macro-patterns and drill down into the patterns themselves.

Within each pattern, they could view the range of associated wireframes and click on components / widgets to see their details. If a producer was viewing a sign-in pattern, for example, they could examine each constituent component, such as the grey prompt text within the text box. That component’s details would tell them that the text could be changed to fit the client’s style of content, and other such things.

The advantage was that when producers were designing, they could make sure that they didn’t miss out any configurable items.
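
As a rough sketch of the structure this implies (TypeScript; the type and field names are my own inventions for illustration, not the system we actually built):

    // Each configurable component is referenced from a wireframe by its code.
    interface ConfigurableComponent {
      code: string;   // unique reference used in the wireframe
      name: string;   // e.g. "grey prompt text within the text box"
      notes: string;  // what may be changed to fit a client's style
    }

    interface Wireframe {
      title: string;
      components: ConfigurableComponent[];
    }

    interface Pattern {
      code: string;          // unique pattern code
      title: string;         // e.g. "Sign in"
      tags: string[];        // keywords, including synonyms, for searching
      macroPattern: string;  // higher-level grouping used for drill-down
      wireframes: Wireframe[];
    }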

Findability

This was crucial to uptake: producers had to be able to find a suitable pattern once they had framed a problem. Research, however, showed that different producers framed problems in subtly different ways, using different terms: sign in for one producer was log in for another and authenticate for a third.
Each pattern therefore contained a number of tags to deal with this synonymy.
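
A small sketch of how the tag-based search could work, reusing the Pattern sketch above (again, purely illustrative):

    // "sign in", "log in" and "authenticate" all lead to the same pattern
    // because each pattern carries its synonyms as tags.
    function findPatterns(query: string, patterns: Pattern[]): Pattern[] {
      const q = query.trim().toLowerCase();
      return patterns.filter(p =>
        p.title.toLowerCase().includes(q) ||
        p.tags.some(tag => tag.toLowerCase().includes(q))
      );
    }

    // findPatterns("log in", library) and findPatterns("authenticate", library)
    // would both return the sign-in pattern if it is tagged with
    // ["sign in", "log in", "authenticate"].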

Change control

We were also aware that patterns could be improved even if the first draft was based on current practice. This meant that we needed a change control procedure to understand the impact on clients’ sites if changes were brought in.

Any provisions will depend largely upon the organisation that uses the library, but our solution was to hold regular meetings to discuss any proposed changes. Such proposals could come from a variety of sources: producers, UX people, developers, the business, or clients.

To aid this, we proposed a librarian role: someone who acted as a gatekeeper, collating and presenting change requests to the change control committee. The librarian acted as the first port of call for library change requests and could begin a preliminary assessment before presenting them to the committee.

We also realised that there might be emergencies when we couldn’t wait for these meetings, so an emergency committee was proposed: members (or their representatives) drawn from the stakeholders who would be available at very short notice to discuss changes.

Greatest Effort

Even though a great amount of effort went into producing the library, it was intended to work within a Lean development environment. This seems paradoxical, that so much work should go into something like this, but we had to provide a fertile enough environment for producers to be able to work. Because design needs to be one step ahead of development in Agile processes, having a large set of documentation ready would be invaluable.

The time needed to create such a library from scratch, however, means it is probably only feasible for a larger organisation with many clients; this is where the benefits of a pre-made library are most keenly felt. Within a smaller organisation with fewer clients, the initial investment may be hard to recoup unless spectacular growth is anticipated.

The greatest effort I made was in getting the content of the library correct. This involved scouring existing clients’ sites to elicit designs and break them down into their fundamental interactions. This level of abstraction needs to be undertaken by an experienced user experience practitioner, or someone who readily understands what degree of information is necessary to use the pattern.

A surprising amount of time was spent thinking through how to get the project off the ground, but this article should help you avoid some of the pitfalls.


Go back to Enterprise UI Design Patterns - Part II
or go back to Enterprise UI Design Patterns - Part I

Enterprise UI Design Patterns - Part II


Summary

Following on from Enterprise UI Design Patterns - Part I, this second part of three is where I talk about the content of a pattern.
Enterprise UI Design Patterns - Part I
Enterprise UI Design Patterns - Part III

What content goes in a pattern?

The fundamental contents of each pattern were based around the producers’ primary needs for design and development. Each item of content is listed below with its rationale:

A descriptive title: immediate information about the pattern; a reference.
A unique code to identify each pattern: uniquely identifies a particular pattern, version, wireframe, and component.
A description: more information about the pattern than just its name.
Tags / keywords: findability.
A list of user problems that the pattern should help solve: places the pattern into its context as it relates to users.
Acceptance criteria: used by producers and user acceptance testers to validate a project.
Workflow: describes the user flow and journey through the pattern at a high level.
Wireframes and specification (including configurable items): detailed visual information about the pattern (often done in Balsamiq).
Examples of use in existing websites: screenshots of the pattern in use.

The configurable items were each referenced in the wireframe by a unique code. This meant that producers could see immediately which things could be configured, which made the design of common components more straightforward. Examples of the pattern in use helped to illustrate it; producers rarely needed this information, and it was included mostly for staff outside the unit and for clients.
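
As a rough illustration (the codes, field names and values below are invented for the example, not those we actually used), a pattern and its coded configurable items might look something like this:

    const signInPattern = {
      code: "PAT-017",
      title: "Sign in",
      description: "Allows a returning user to authenticate with email and password.",
      tags: ["sign in", "log in", "authenticate"],
      userProblems: ["I want to get back to my saved content quickly."],
      configurableItems: {
        "PAT-017-C1": "Prompt text inside the email text box",
        "PAT-017-C2": "Label on the submit button",
      },
    };

    // A wireframe annotation can then point at "PAT-017-C1", and a producer
    // can look up exactly what may be changed for a given client.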

A miscellaneous section was also created, with contents guided by stakeholders’ needs; again, each item is listed with its rationale:

Where the pattern is currently used (generally within the company’s stable, but sometimes we referred to external examples): to understand the impact across all clients’ websites if changes are made.
User states and associated information: to understand complexity.
General UX principles underpinning the design decisions: useful for re-design.
Alternative patterns (other ways to solve the same user problems): allows producers and clients to see different ways to solve the same user problem and get work / time estimates.
Related patterns (generally those within the same macro-pattern): a family that contributes towards a higher-level user goal.
Testing and research related to the design decisions: provides justification for design decisions.
How not to do it (anti-patterns): informs design decisions.
SEO recommendations: to provide good SEO.
Estimates of work and time for implementing a pattern: for planning / estimating the resources needed for a project.
Version history: to track down particular versions.

Incorporating user stories as acceptance criteria for user acceptance testing

The organisation used an Agile method of production. When a project went into user acceptance testing, a number of acceptance criteria were generated from scratch and the project was tested against them. These criteria often came from user stories.

Pattern definitions incorporated these criteria, which brought several benefits (sketched below):

User acceptance testers had ready-made criteria - these were already embedded into the pattern
Each pattern would already have been validated as working
UAT could be automated to a greater degree
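
As a rough sketch of the idea (TypeScript; the criterion shape and identifiers below are invented for illustration, not our actual tooling), the criteria could ship inside the pattern definition and be iterated over at UAT time:

    interface Criterion {
      id: string;
      given: string;
      when: string;
      then: string;
    }

    // Criteria shipped with a hypothetical sign-in pattern.
    const signInCriteria: Criterion[] = [
      {
        id: "SIGNIN-AC1",
        given: "a registered user on the sign-in page",
        when: "they submit a correct email and password",
        then: "they are taken to their account page",
      },
      {
        id: "SIGNIN-AC2",
        given: "a registered user on the sign-in page",
        when: "they submit an incorrect password",
        then: "an inline error is shown and the email field keeps its value",
      },
    ];

    // Testers (or an automated harness) can iterate over the criteria that
    // come with the pattern instead of writing them from scratch per project.
    for (const c of signInCriteria) {
      console.log(`${c.id}: GIVEN ${c.given} WHEN ${c.when} THEN ${c.then}`);
    }
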
Go on to Enterprise UI Design Patterns - Part III
or go back to Enterprise UI Design Patterns - Part I