23 August 2013

Crowd-sourcing research

One idea I had a few years ago was to use the various crowd-sourcing websites as a source of willing and cheaply-paid participants for UX research.

Wait, you fool! You cannot do an hour-long usability session like that!

Well, the keyword is reductionism. UX research is about understanding brains and behaviour. It's psychology. And a few psychologists have already been using crowd-sourcing sites as sources of participants. [1, 2]

The overall results are quite promising: it is possible to run such experiments with a crowd-sourced sample. There are, however, precautions to take, and it's only fair to pay participants a decent amount, if only to ensure a low drop-out rate and faster participation.

So it's not cheaper, but it is faster. One study [2] said, "Performing a full-sized replication of the Nosofsky et al. [40] data set in under 96 hours is revolutionary." For UX research, this shows wonderful promise for particular questions, as long as the experiment is designed well.

But I also want to make sure that I'm dealing with an ethical company. In my new role as a research lead for Vodafone, there is a reputation risk to the company. This means that I've been participating in some of these sites as a worker to check out conditions from within. A lot depends upon the conditions of the micro-tasks themselves, but companies also have their own attitudes to workers, which I took into account. We will not work with companies that argue about or unnecessarily delay payments to workers, or use petty reasoning to 'trick' workers out of their money.

To guard against reputational risk, we will engage only those companies that treat workers with at least some respect.

I don't expect this post to make any waves, at least not amongst the crowd-sourcing sites, because our work is fairly small potatoes for them. But we are a large company with expanding requirements, and it's always good to remember that those on the bottom can, with a slight change in context, become the ones who pay the piper.

References

[1]   Paolacci G, Chandler J, Ipeirotis PG (2010) Running experiments on Amazon Mechanical Turk. Judgment and Decision Making 5(5): 411-419.

[2]   Crump MJC, McDonnell JV, Gureckis TM (2013) Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research. PLoS ONE 8(3): e57410. doi:10.1371/journal.pone.0057410

09 July 2013

Will UX exhaust itself?

When I first started in the field (originally doing usability along with some design), it was easy to make an impact. Just think of all those nasty 1990s websites with fundamental flaws that could be remedied with a wave of a good developer's hand. It was like that.

But now, the whole world is getting on board with user experience, and the upshot is that everyone is a little more savvy than they used to be. This in turn makes small-change-big-impact gigs far less frequent. UX is focusing increasingly upon minutiae because that's where the remaining benefits are to be found.

But eventually, the law of diminishing returns will kick in and we will come to the point where UX talent will be employed for the quick wins and little else. Or perhaps even not at all.

Another possibility is that UX practitioners will increasingly focus on niche areas. "Hey!", they'll say. "You need someone to optimise a revenue stream from selling vintage astronomy books? Well, I've done vintage chemistry books which is much the... Oh, okay. So you think I haven't got the skills..."

I wonder if this might lead not so much to a contraction in the market, but rather a slowdown? I can envisage companies saying, "Well, these are all UX problems but they're pretty much solved, so let's hold off on that freelancer."

19 May 2013

Excel for serious data analysis?!



Let's all laugh at Excel - sure, I do. When doing data analysis, it's
good for data entry but I'd hate to rely on it for anything serious.

But a while ago, I came across a good use case for it. I'm sure R
could do the same thing fairly well if needed, but here's a nice quick
and dirty method of exploring data sets with lots of variables.

What's the problem with lots of variables? Data analysts these days are supposed to start drooling when they get more data, but the reality is that more variables make it harder to see what's going on. The chances are that the noise is outgrowing the signal. Yeah, we can all be pompous and promote ourselves by saying, "Hey, dirty data is what I live for - so just get out of my way, I'm coming through" and other such macho nonsense.

If we're really analysing data, at some point we want answers. So how
can we decrease the noise?

For an example, I'm going to look at an available data set from
SEOMoz. What they did was monitor a number of websites and their
Google rankings in response to various queries.

But for the task of trying to understand what happened here, it's confusing. Models with this number of variables are probably going to fail because the ability to discriminate a variable's effect will often fall with the addition of an influencing variable - even if that influencing variable is orthogonal to what we're measuring.

So my first job was to more clearly define my problem space by reducing noise. I did this by correlating each variable with each other variable. This formed a nice square matrix of correlations, with a diagonal consisting of exactly 1.0 (each variable correlated with itself). By the way, I'm not looking at significance here, so alpha inflation is not an issue.

But this was still hard to really visualise. Visualising is a critical early step in almost any analysis process. It helps me develop a mental model of the data I'm going to be working with.

But here, Excel and conditional formatting came to the rescue. I figured that if I could colour a cell according to the strength of a correlation, I might be able to get a high-level view of the data which lets me focus on bits of interest.

So I copied out a conditional formatting rule which is probably useful for any correlation matrix.

Then, by making the cells tiny, I could get a nice visualisation that allowed me to identify groups of variables that appeared to co-vary strongly (the darker bits).
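
If you'd rather script the same trick, here's a minimal sketch of the heat-map idea in Python with pandas and matplotlib. (The file name 'data.csv' is hypothetical; any table with one numeric column per variable will do.)

import pandas as pd
import matplotlib.pyplot as plt

# Load the variables and build the square correlation matrix
# (1.0 down the diagonal, just like the spreadsheet version).
df = pd.read_csv('data.csv')  # hypothetical file: one column per variable
corr = df.corr()

# Colour each cell by the strength of the correlation so that
# strongly co-varying groups show up as dark blocks.
fig, ax = plt.subplots()
im = ax.imshow(corr.values, cmap='RdBu_r', vmin=-1, vmax=1)
ax.set_xticks(range(len(corr.columns)))
ax.set_yticks(range(len(corr.columns)))
ax.set_xticklabels(corr.columns, rotation=90)
ax.set_yticklabels(corr.columns)
fig.colorbar(im)
plt.show()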

I could then look at these groups in more detail and decide whether to delete them or keep them, simplifying the data set somewhat. It was also helpful to go to others with a list of variables to ignore and tell them which ones to focus on instead.

Of course, this is only a very small part of the story, but as a first few steps, it was useful to reduce the noise.

24 February 2013

Perfect job description and ugly companies

Picture this: You're searching for the perfect employee and your job description rocks. Everything is perfect -- or is it? Is your JD scoring the perfect candidate? Or have you scored an own-goal?

As a freelancer, I spend a lot of time browsing job adverts. Many are fairly anonymous, standard-text type affairs, but a bit of close reading can usually extract some useful information even from these. UX ads, though, sometimes veer towards the silly too.

One I read recently (I don't feel mean enough to link to it) did a superb job of 'selling' the company. If I believed what they wrote, I'd consider them to be a forward-thinking team of the best individuals on the planet.

Of course, we all know it's nonsense, which makes me wonder why we persist with this type of strategy, but something else was nagging me. Then it hit me. There was little about the job itself: what the successful candidate would be doing. Just many sentences about how they only ever took the best.

And that company looked ugly to me.

Seriously unattractive and slightly fake. Something like a salesman turning up to your birthday party and getting you to buy something from him. Or an insecure person desperately boasting about how great they are. Confidence is good, but walking the talk talks louder than talking the walk.

Maybe I'm not good enough / not confident enough? Well, I get enough repeat work to know that I must be doing something useful for people. I have failures too, but these are learning processes we all encounter.

Is it because I'm scared of the challenge? 'Scared' isn't the right word. I've been in competitive fields for many years and I'm happy to compete and lose (and even win on rare occasions!) because it's a way to learn and be better. If I'm not periodically knocked back and picking myself up again, I'm not learning anything.

But this point is somewhat on the right track.

I wouldn't ever apply because the content felt like the company were putting up a huge wall between me and them.

The best challenges I've ever had are those that made me want to try because I felt I would learn something even if I failed.

But this job description made me feel like I should consider myself honoured to read the advert, never mind apply. A speck of worthless, unproductive dirt like me could, at best, be granted the privilege of prostrating my pointless form at the feet of the almighty Company.

Well, I am being a bit harsh, but the job ad's hyperbole did make me feel like that.

And that's why I wouldn't apply. I'm happy to face challenges, but I want to get something worthwhile out of them: to learn something new for myself. I just don't find being invited to be a faceless member of a faceless, self-proclaimed elite attractive without knowing why they're elite. Just saying it is not enough.

So my advice if you're writing a job description is this: making your company attractive isn't about selling its corporate beliefs; it's about saying, or even better, showing, that you do cool stuff and anyone is welcome to come in and chat about work. Much like how my most enjoyable interviews as an employee were those where I forgot myself and just talked about my work like an enthusiastic child. The worst were where I felt I was being examined by a higher power.

Feel free to disagree. But as someone who has run a business, if I found that you wrote a job description that actually turned people off from applying, then I'd suggest that maybe you need some additional training.

29 January 2013

SEO: Scraping synonyms from Wikipedia

Here's a Python script for scraping synonyms from Wikipedia. You provide the core keywords, and Python (plus the BeautifulSoup module) will get the synonyms.

The last article I wrote about getting SEO keywords from Wikipedia seemed interesting to people. The method was, however, manual, which takes time and effort to complete for more than a couple of keywords.

If you want to hurry things up or automate the process, below is a Python script which scrapes potential keywords from Wikipedia. If you're going to be doing a lot of this, I'd be tempted to download a snapshot of the Wikipedia database and work from that rather than hammering the live site. This script is really intended for educational purposes or for those who want to retrieve a very small list of synonyms.

The output is a CSV (comma separated file) that can be opened in a spreadsheet. I find LibreOffice much better at handling Unicode content from a CSV file than Excel and it’s a free download. Just don’t ask me how to import Unicode CSV with Excel!

Each series of synonyms requires a lot of cleaning but it’s easier than downloading it all yourself.

"""
Scrape synonyms from Wikipedia
"""

import urllib2
import BeautifulSoup as BS

# phrases go in here as a list of strings
# e.g., names = ['United_kingdom','United_States','Philippines'] looks for synonyms for the UK, US and the Philippines

names = ['United_kingdom','United_States','Philippines'] 

URL = "http://en.wikipedia.org/w/index.php?title=Special:WhatLinksHere/%s&hidetrans=1&hidelinks=1&limit=500"

fout = open('synonyms_test.csv','w')
for name in names:
    active_URL = URL%name
    print active_URL
    req = urllib2.Request(active_URL, headers={'User-Agent' : "Magic Browser"})
    data = urllib2.urlopen(req)
    stuffs = data.read()
    soup = BS.BeautifulSoup(stuffs)
    links_body = soup.find("ul", {"id" : "mw-whatlinkshere-list"})
    fout.write('%s, '%name)
    try:
        links = links_body.findAll('a')
        for link in links:
            if link.text != "links":
                fout.write('%s, '%link.text.encode('utf-8'))
        fout.write('\n')
    except AttributeError: # needed in case nothing is returned
        pass
fout.close()

A couple of things: this code needs BeautifulSoup installed; see their install notes on how to do this. The module parses the Wikipedia page. The script then iterates through the names you've provided and scrapes each page for the first 500 links that are not transclusions or plain links. This reduces the problem space down to mostly synonymous redirects, which is what we want.

To run this script, edit the 'names' list near the top and insert the phrases you want to retrieve synonyms for. Don't leave spaces: replace them with underscores. The script also doesn't check whether the phrase you've used is the canonical page, so that's something you need to check for yourself.

Once the script’s finished, load the CSV file into LibreOffice Calc (or some other form of spreadsheet that can load CSV files with Unicode), and delete anything that clearly isn’t a valid synonym for SEO purposes.

When that’s done, delete all the blanks and shift cells left (not up!), and you should have a spreadsheet full of nice synonyms that can enhance your SEO.

Happy scraping!

15 January 2013

Some SEO keyword tips

Here's a quick tip to get keywords to improve your search engine optimisation (SEO) using Wikipedia - for free! Enter your term into Wikipedia. If it's a brand name, enter the product type (e.g., 'handbags').

Click on 'Toolbox' to the left and then 'What links here', and you'll be shown a new page that details all inbound links to that page within Wikipedia.

Then, under 'Filters', 'hide' both transclusions and links so that only re-directs to the page are shown.

And hey presto! There's a nice list of synonymous terms with a variety of spellings.
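
If you'd rather skip the clicking, the same filters can be written straight into the URL; for example, for the 'Handbag' page:

http://en.wikipedia.org/w/index.php?title=Special:WhatLinksHere/Handbag&hidetrans=1&hidelinks=1&limit=500

This is the same pattern the scraping script in the post above uses.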

For example, handbag comes up with:


Clutch (handbag) (redirect page) (links)
Manbag (redirect page) (links)
Handbags (redirect page) (links)
Man bag (redirect page) (links)
Man-bag (redirect page) (links)
Manpurse (redirect page) (links)
Hand bag (redirect page) (links)
Hand-bag (redirect page) (links)
Hand-bags (redirect page) (links)
Man purse (redirect page) (links)
👜 (redirect page) (links)
Evening bag (redirect page) (links)

whereas 'telescope' comes up with:

TeleScope (redirect page) (links)
Telescopes (redirect page) (links)
Perspicil (redirect page) (links)
Telescopy (redirect page) (links)
Astronomic telescope (redirect page) (links)
Telescopic observational astronomy (redirect page) (links)
Telescopically (redirect page) (links)
Astronomical telescope (redirect page) (links)
Ground telescope (redirect page) (links)

13 January 2013

Prolog and UX Planning

Summary: Prolog is a logic programming language that can help craft sitemaps and workflows by checking that solutions meet all business and technical constraints. Here, I'll chat a little about Prolog and how it might be used, with more detailed information coming in future.

Part of Thought Into Design's work is natural language interfaces. Among the many tools we use is a language called Prolog. This is a logic language with a strongly declarative style. It works by defining facts and rules and then asking queries. In some ways, it's how I envisaged computer programming to be, back in the early 1980s, before I ever programmed anything.

Examples of facts are:


man(alan).
man(tony).
woman(jell).
woman(ann).

These say (in order) that the atom 'alan' is a 'man', as is 'tony', whereas 'jell' and 'ann' are both classed as 'woman'.

Rules determine how atoms relate to each other. Using the above code, we could define some rules thus:


human(X) :- man(X).
human(X) :- woman(X).

Everything classed as either 'man' or 'woman' is now also classed as 'human'.

With these in place, we can issue queries that tell us if a particular result is logically possible or not:

human(X).

And we get a print-out of everyone who is human: X = alan ; X = tony ; X = jell ; X = ann. This is a very basic example and seems similar to a query language, but Prolog's power is in being able to infer relationships from what it's been told.

Prolog could be quite useful when crafting sitemaps and doing workflows, more so for larger, complex sites than simple ones. There are often times when several different business rules need to be accounted for, and the more complex the rule-set, the harder it is for a designer to navigate through them.

Prolog, or other logic languages, might be a way to help determine if particular sitemaps and workflows are valid solutions to problems or not.
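
As a rough sketch of what I mean (all the predicate names here are invented for the example), a sitemap could be encoded as facts, with a rule that flags structural problems:

% Hypothetical sitemap: pages and the links between them.
page(home).
page(products).
page(contact).
link(home, products).
link(products, contact).

% A page is reachable if it is the home page, or is linked
% from a reachable page (assumes the link graph is acyclic).
reachable(home).
reachable(P) :- link(Q, P), reachable(Q).

% A sitemap breaks the 'everything must be reachable' rule
% if any page is an orphan.
orphan(P) :- page(P), \+ reachable(P).

Querying orphan(X). would then list any pages that violate the rule, and further business constraints could be layered on in the same way.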

My ideas are quite unformed as yet and this is something I hope to return to soon so watch this space!

Twitter Bootstrap for Responsive UX Design

Summary: We redesigned a website to be responsive, using Twitter Bootstrap and jQuery to create design documentation. Bootstrap proved to be an effective tool for conventional interactions but less useful for more complex ones.

One task we've done lately has been to redesign the Thought Into Design site. It's quite boring and uncommunicative, and the analytics suggest that engagement can and should be better.

The broad business requirements are:

  • Offer a client list
  • Explain the types of work we do
  • Show work samples
  • Improve the user journey to contact us

After some initial planning, we decided to try out Twitter Bootstrap and frankly, it was a nice experience. There is a short summary of using Bootstrap with real clients which is well worth a read.

What we found was that it is an incredibly quick way to code up some static pages (HTML / CSS / JavaScript) and quite an enjoyable way to code after years of trying to get DIVs to fall into the right place. In some ways, it reminds me a little of table-based coding (and yes, I'm old enough to remember when virtually all sites were done that way!) so I have reservations. Fundamentally, coding is done by defining rows and then how many of the twelve grid columns each element in a row spans. You can see the redesign in progress here. Plus, responsiveness is baked in.
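
To make that concrete, here's roughly what a row looks like in the version of Bootstrap current at the time of writing (Bootstrap 2), where 'span' classes carve up the twelve columns; the content is just placeholder text:

<!-- One row from Bootstrap 2's twelve-column grid: 8 + 4 columns -->
<div class="container">
  <div class="row">
    <div class="span8">Main content</div>
    <div class="span4">Sidebar</div>
  </div>
</div>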

But after over a decade of doing wireframes, responsive design quickly muddies the water and increases the workload significantly for designers. Instead of a single set of wireframes, we now need to produce them for web, tablets and phones of various shapes and sizes.

Curiously, I remember being pooh-poohed a few years back when I suggested coding alternative CSS sheets for small-screen devices like the Asus EEE netbook, and was told that only the web would ever be needed. This was just before smartphones...

As an agency, we're happy to do whatever work is necessary to achieve the client's business goals. But it's also inefficient. What if we could just skip the wireframing process and move directly onto working prototypes as documentation? This is something I've done before, particularly with complex interactions that needed to be tested dynamically, but coding was always a slow process. Bootstrap and jQuery have made the coding process a doddle now. I can see a future for designer-orientated tools that handle this with less code and more visual creation (I wonder if there's a product to be made here).

Major advantages are:

  • Passing off working code to developers that documents the workflows and interactions within itself
  • Reducing time spent documenting interactions statically when a working prototype will do it better
  • Communicating the interactions to stakeholders better


But it's not all roses. Going direct to code, for me personally, hinders the creative process. I need to have some aim before I code, much like if I'm writing a web application, where I need to spend time planning long before I write a single line of code.

In addition, while it can be instructive to be shown capabilities beyond my reckoning, I also need to be able to think and ideate beyond the capabilities of the software. One example is how to communicate complex information using dynamic graphs and charts; Bootstrap won't handle those complex interactions. For the simple, bread-and-butter stuff, Bootstrap is a superb tool, but I will still run to my sketchpad as a first option.

Disadvantages are:

  • Might hinder the creative process
  • Doesn't help initial ideation
  • The working code might not be up to par
  • Deals poorly with widgets outside of its reckoning
  • Reinforces the idea that UX is about code

In conclusion, Twitter Bootstrap is a very useful tool, particularly for standard, conventional interactions: planning forms with normal widgets, for example. The resulting code is superb documentation for stakeholders, users (testing / research) and developers / testers. Less orthodox interactions, however, require a different framework for now.