23 August 2013

Crowd-sourcing research

One idea I had a few years ago was to use the various crowd-sourcing websites as a source of willing and cheaply-paid participants for UX research.

Wait, you fool! You cannot do an hour-long usability session like that!

Well, the key word is reductionism. UX research is about understanding brains and behaviour. It's psychology. And a few psychologists have already been using crowd-sourcing sites as sources of participants. [1, 2]

The overall results are quite promising: it is possible to run such experiments with a crowd-sourced participant sample. There are, however, precautions to take, and it's only fair to pay participants a decent amount, if only to ensure a low drop-out rate and faster completion.

So it's not necessarily cheaper, but it is faster. One study [2] said, "Performing a full-sized replication of the Nosofsky et al. [40] data set in under 96 hours is revolutionary." For UX research, this shows great promise for particular questions, as long as the experiment is designed well.

But I also want to make sure that I'm dealing with an ethical company. In my new role as a research lead for Vodafone, there is a reputation risk to the company. This means that I've been participating in some of these sites as a worker, to check out conditions from within. A lot depends upon the conditions of the micro-tasks themselves; but companies also have their own attitudes to workers, which we take into account. We will not work with companies that argue about or unnecessarily delay payments to workers, or use petty reasoning to 'trick' workers out of their money.

To guard against reputational risk, we will engage only those companies that treat workers with at least some respect.

I don't expect this post to make any waves, at least not amongst the crowd-sourcing sites, because our work is fairly small potatoes for them. But we are a large company with expanding requirements, and it's always good to remember that those at the bottom can also, with a slight change in context, become the ones who pay the piper.

References

[1]   Paolacci G, Chandler J, Ipeirotis PG (2010) Running experiments on Amazon Mechanical Turk. Judgment and Decision Making 5(5).

[2]   Crump MJC, McDonnell JV, Gureckis TM (2013) Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research. PLoS ONE 8(3): e57410. doi:10.1371/journal.pone.0057410
