13 February 2015

Fast prime numbers in Python

I spent some time recently on Project Euler and got side-tracked by the efficient calculation of prime numbers. After using a brute force method (iterating through a range of numbers and trying to find their factors), I read around and found a nice page at http://rebrained.com/?p=458 and a good Stack Overflow page at http://stackoverflow.com/questions/2068372/fastest-way-to-list-all-primes-below-n. These inspired me to try again and I came up with the following routine. It's faster than most, but when generating primes up to more than approximately 350,000-400,000 it's not as fast as primes6 on the first page I linked to. Below that, nothing seems to touch it.

It's also different. It uses numpy (which is cheating, in a way) but does the job well. I like it because it seems more understandable once you've grasped that the routine doesn't do any division. Instead, it's pure sieve operations on a vector of booleans. Anything found to be divisible by anything other than 1 and itself is marked as False, and the routine finishes by returning the indices of True values - which are primes.

I've triangulated the results by summing them and comparing the sums against those of other routines, and I haven't noticed any differences yet.

import numpy as np
from math import sqrt

def ajs_primes3a(upto):
    mat = np.ones(upto, dtype=bool) # set up a long boolean array
    mat[0] = False # remove 0
    mat[1] = False # remove 1
    mat[4::2] = False # remove anything divisible by 2 (but keep 2 itself)
    for idx in range(3, int(sqrt(upto)) + 1, 2): # sieve with each odd number up to sqrt(upto)
        mat[idx*2::idx] = False # mark its multiples as not prime
    return np.where(mat)[0] # return the indices of the True values - the primes
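
As a quick version of the triangulation mentioned above, the output can be checked against a naive trial-division routine. The reference below is just an illustration of mine, not one of the routines from the pages linked to earlier:

def slow_primes(upto):
    # naive trial division by previously found primes - for checking only, not speed
    primes = []
    for n in range(2, upto):
        if all(n % p for p in primes if p * p <= n):
            primes.append(n)
    return primes

assert ajs_primes3a(100).sum() == sum(slow_primes(100)) # both 1060
assert len(ajs_primes3a(100)) == len(slow_primes(100))  # both 25 primes below 100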

I'm quite pleased with this early foray into optimising a routine, but there's work to do compared to prime6. What I like is that it has no division, seems to be a pure sieve, and doesn't create a long list of numbers.

I tried other versions with a half-series, so that anything divisible by 2 just wasn't considered at all, but what I came up with just wasn't as fast.

Times (seconds, same machine, best of 3-6 runs)

             10k      100k     500k     1m       20m
prime6       0.001258 0.002722 0.007229 0.001229 0.22388
erat         0.005414 0.059047 0.333737 0.673749 15+ seconds
ajs_primes3a 0.000360 0.001897 0.008540 0.016952 0.70135

Up to 100k, mine leads but prime6 takes over strongly after that. Mine doesn't lose too much ground, considering, so it's best to think of mine as fast-ish but nicely understandable. 
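
For anyone wanting to reproduce timings like these, Python's timeit module will do the job. A minimal harness might look like this (a sketch of my own, not necessarily the exact setup behind the table above):

import timeit

# best of 5 repeats of 10 calls each, for primes up to 1 million
best = min(timeit.repeat(lambda: ajs_primes3a(1000000), number=10, repeat=5)) / 10
print('ajs_primes3a up to 1m: %.6f seconds per call' % best)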

23 August 2013

Crowd-sourcing research

One idea I had a few years ago was to use the various crowd-sourcing websites as a source of willing and cheaply-paid participants for UX research.

Wait, you fool! You cannot do an hour-long usability session like that!

Well, the keyword is reductionism. UX research is about discovering brain and behaviour. It's psychology. And a few psychologists have already been using crowd-sourcing sites as sources of participants. [1, 2]

The overall results are quite promising: it is possible to undertake such experiments with a crowd-sourced sample. There are, however, precautions to be taken, and it's only fair to pay participants a decent amount, if only to ensure a low drop-out rate and faster participation.

So it's not cheaper but it is faster. One study [2] said, "Performing a full-sized replication of the Nosofsky et al. [40] data set in under 96 hours is revolutionary." For UX research, it shows a wonderful promise for particular questions as long as the experiment is designed well.

But I also want to make sure that I'm dealing with an ethical company. In my new role as a research lead for Vodafone, there is a reputation risk to the company. This means that I've been participating in some of these sites as a worker to check out conditions from within. A lot depends upon the conditions of the micro-tasks themselves, but the companies also have their own attitudes to workers, which I took into account. We will not work with companies that argue about or unnecessarily delay payments to workers, or use petty reasoning to 'trick' workers out of their money.

To guard against reputational risk, we will engage only those companies that treat workers with at least some respect.

I don't expect this post to make any waves, at least not amongst the crowd-sourcing sites, because our work is fairly small potatoes for them. But we are a large company with expanding requirements, and it's always good to remember that those on the bottom can also, with a slight change in context, become the one who pays the piper.

References

[1]   Paolacci, Chandler, Ipeirotis (2010) Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, Vol. 5, No. 5.

[2]   Crump MJC, McDonnell JV, Gureckis TM (2013) Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research. PLoS ONE 8(3): e57410. doi:10.1371/journal.pone.0057410

09 July 2013

Will UX exhaust itself?

When I first started in the field (originally doing usability along with some design), it was easy to make an impact. Just think of all those nasty 1990s websites with fundamental flaws that could be remedied with a wave of a good developer's hand. It was like that.

But now the whole world is getting on board with user experience, and the upshot is that everyone is a little more savvy than they used to be. This in turn makes small-change-big-impact gigs far less frequent. UX is focusing increasingly upon minutiae because that's where the benefits will be found.

But eventually, the law of diminishing returns will kick in and we will come to the point where UX talent will be employed for the quick wins and little else. Or perhaps even not at all.

Another possibility is that UX practitioners will increasingly focus on niche areas. "Hey!", they'll say. "You need someone to optimise a revenue stream from selling vintage astronomy books? Well, I've done vintage chemistry books which is much the... Oh, okay. So you think I haven't got the skills..."

I wonder if this might lead not so much to a contraction in the market, but rather a slowdown. I can envisage companies saying, "Well, these are all UX problems but they're pretty much solved, so let's hold off on that freelancer."

19 May 2013

Excel for serious data analysis?!

Let's all laugh at Excel - sure, I do. When doing data analysis, it's good for data entry, but I'd hate to rely on it for anything serious.

But a while ago, I came across a good use case for it. I'm sure R could do the same thing fairly well if needed, but here's a nice quick and dirty method of exploring data sets with lots of variables.

What's the problem with lots of variables? Data analysts these days are supposed to start drooling when they get more data, but the reality is that more data often makes it harder to see what's going on. The chances are that the noise is growing faster than the signal. Yeah, we can all be pompous and promote ourselves by saying, "Hey, dirty data is what I live for - so just get out of my way, I'm coming through" and other such macho nonsense.

If we're really analysing data, at some point we want answers. So how
can we decrease the noise?

For an example, I'm going to look at an available data set from
SEOMoz. What they did was monitor a number of websites and their
Google rankings in response to various queries.

But when it comes to the task of trying to understand what happened here, it's confusing. Models with this number of variables are probably going to fail because the ability to discriminate a variable's effect will often decrease with the addition of an influencing variable - even if that influencing variable is orthogonal to what we're measuring.

So my first job was to more clearly define my problem space by reducing noise. I did this by correlating each variable with each other variable. This formed a nice square matrix of correlations, with a diagonal consisting of exactly 1.0 (each variable correlated with itself). By the way, I'm not looking at significance here, so alpha inflation is not an issue.
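
For what it's worth, the same correlation matrix takes a couple of lines of Python with pandas. The file name below is a placeholder - substitute whatever export of the data you actually have:

import pandas as pd

# 'seomoz_rankings.csv' is a placeholder name for the exported data set
df = pd.read_csv('seomoz_rankings.csv')
corr = df.select_dtypes('number').corr() # square matrix with 1.0 down the diagonal
print(corr.round(2))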

But this was still hard to really visualise. Visualising is a critical early step in almost any analysis process. It helps me develop a mental model of the data I'm going to be working with.

But here, Excel and conditional formatting came to the rescue. I figured that if I could colour a cell according to the strength of a correlation, I might be able to get a high-level view of the data which lets me focus on the bits of interest.

So I wrote a conditional formatting rule which is probably useful for any correlation matrix.

Then, by making the cells tiny, I could get a nice visualisation that allowed me to identify groups of variables that appeared to co-vary strongly (the darker bits).
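
The rough Python equivalent of those tiny coloured cells is a heatmap of the absolute correlations, where the strongly co-varying groups show up as dark blocks (again, the file name is a placeholder):

import pandas as pd
import matplotlib.pyplot as plt

corr = pd.read_csv('seomoz_rankings.csv').select_dtypes('number').corr()
plt.imshow(corr.abs(), cmap='Greys', vmin=0, vmax=1) # darker = stronger co-variation
plt.xticks(range(len(corr)), corr.columns, rotation=90, fontsize=6)
plt.yticks(range(len(corr)), corr.columns, fontsize=6)
plt.colorbar(label='|correlation|')
plt.tight_layout()
plt.show()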

I could look at these in more detail and decide whether to delete them or keep them, and so simplify the data set somewhat. It was also helpful to be able to go to others with a list of variables to ignore and tell them to focus on the replacements instead.

Of course, this is only a very small part of the story, but as a first step it was a useful way to reduce the noise.

24 February 2013

Perfect job description and ugly companies

Picture this: You're searching for the perfect employee and your job description rocks. Everything is perfect -- or is it? Is your JD scoring the perfect candidate? Or have you scored an own-goal?

As a freelancer, I spend a lot of time browsing job adverts. Many are fairly anonymous, standard-text type affairs, but a bit of close reading can usually extract some useful information even from these. But UX ads sometimes veer towards the silly too.

One I read recently (I don't feel mean enough to link to it) did a superb job of 'selling' the company. If I believed what they wrote, I'd consider them to be a forward-thinking team of the best individuals on the planet.

Of course, we all know it's nonsense, which makes me wonder why we persist with this type of strategy, but something else was nagging me. Then it hit me. There was little about the job itself, about what the successful candidate would be doing. Just many sentences about how they only ever took the best.

And that company looked ugly to me.

Seriously unattractive and slightly fake. Something like a salesman turning up to your birthday party and getting you to buy something from him. Or an insecure person desperately boasting about how great they are. Confidence is good, but walking the talk talks louder than talking the walk.

Maybe I'm not good enough / not confident enough? Well, I get enough repeat work to know that I must be doing something useful for people. I have failures too, but these are learning processes we all encounter.

Is it because I'm scared of the challenge? 'Scared' isn't the right word. I've been in competitive fields for many years and I'm happy to compete and lose (and even win on rare occasions!) because it's a way to learn and be better. If I'm not periodically knocked back and picking myself up again, I'm not learning anything.

But this point is somewhat on the right track.

I wouldn't ever apply because the content felt like the company were putting up a huge wall between me and them.

The best challenges I've ever had are those that made me want to try because I felt I would learn something even if I failed.

But this job description made me feel like I should consider myself honoured to read the advert, never mind apply. A speck of worthless, unproductive dirt like me could, at best, be granted the privilege of prostrating my pointless form at the feet of the almighty Company.

Well, I am being a bit harsh, but the job ad's hyperbole did make me feel like that.

And that's why I wouldn't apply. I'm happy to face challenges, but I want to get something worthwhile out of them: to learn something new for myself. I just don't find being invited to be a faceless member of a faceless, self-proclaimed elite attractive without knowing why they're elite. Just saying it is not enough.

So my advice if you're writing a job description is this: making your company attractive isn't about selling its corporate beliefs; it's about saying, or even better showing, that you do cool stuff and anyone is welcome to come in and chat about the work. Much like how my most enjoyable interviews as an employee were the ones where I forgot myself and just talked about my work like an enthusiastic child. The worst were where I felt I was being examined by a higher power.

Feel free to disagree. But as someone who has run a business, if I found that you wrote a job description that actually turned people off from applying, then I'd suggest that maybe you need some additional training.

29 January 2013

SEO: Scraping synonyms from Wikipedia

Here's a Python script for scraping synonyms from Wikipedia. You provide the core keywords, and Python (plus the BeautifulSoup module) will get the synonyms.

The last article I wrote about getting SEO keywords from Wikipedia seemed interesting to people. The method was, however, manual, which takes time and effort to complete for more than a couple of keywords.

If you want to hurry things up or automate the process, below is a Python script which scrapes potential keywords from Wikipedia. If you’re going to be doing a lot of this, I’d be tempted to download a snapshot of the Wikipedia database and use that rather than scraping the live site. This script is really intended for educational purposes or those who want to retrieve a very small list of synonyms.

The output is a CSV (comma separated file) that can be opened in a spreadsheet. I find LibreOffice much better at handling Unicode content from a CSV file than Excel and it’s a free download. Just don’t ask me how to import Unicode CSV with Excel!

Each series of synonyms requires a lot of cleaning but it’s easier than downloading it all yourself.

"""
Scrape synonyms from Wikipedia
"""

import urllib2
import BeautifulSoup as BS

# phrases go in here as a list of strings
# e.g., names = ['United_kingdom','United_States','Philippines'] looks for synonyms for the UK, US and the Philippines

names = ['United_kingdom','United_States','Philippines'] 

URL = "http://en.wikipedia.org/w/index.php?title=Special:WhatLinksHere/%s&hidetrans=1&hidelinks=1&limit=500"

fout = open('synonyms_test.csv','w')
for name in names:
    active_URL = URL%name
    print active_URL
    req = urllib2.Request(active_URL, headers={'User-Agent' : "Magic Browser"})
    data = urllib2.urlopen(req)
    stuffs = data.read()
    soup = BS.BeautifulSoup(stuffs)
    links_body = soup.find("ul", {"id" : "mw-whatlinkshere-list"})
    fout.write('%s, '%name)
    try:
        links = links_body.findAll('a')
        for link in links:
            if link.text != "links":
                fout.write('%s, '%link.text.encode('utf-8'))
        fout.write('\n')
    except AttributeError: # needed in case nothing is returned
        fout.write('\n') # still end the row so the next name starts on a new line
fout.close()

A couple of things: this code needs BeautifulSoup installed. See their install notes on how to do this. What this module does is parse the Wikipedia page. The script then iterates through the names you’ve provided and scrapes each page for up to the first 500 links that are not transclusion pages or just plain links. This reduces the problem space down to mostly synonymous redirects, which is what we want.

To run this script, you need to edit line 11 and insert the phrases you want to retrieve synonyms for. Don’t leave spaces: replace them with underscores. The script doesn’t check whether the phrase you’ve used is the canonical page either, so that’s something you need to check for yourself.

Once the script’s finished, load the CSV file into LibreOffice Calc (or some other form of spreadsheet that can load CSV files with Unicode), and delete anything that clearly isn’t a valid synonym for SEO purposes.

When that’s done, delete all the blanks and shift cells left (not up!), and you should have a spreadsheet full of nice synonyms that can enhance your SEO.

Happy scraping!

15 January 2013

Some SEO tips: keywords

Here's a quick tip to get keywords to improve your search engine optimisation (SEO) using Wikipedia - for free! Enter your term into Wikipedia. If it's a brand name, enter the product type (e.g., 'handbags').

Click on 'Toolbox' to the left and then 'What links here', and you'll be shown a new page that details all inbound links to that page within Wikipedia.

Then, under 'Filters', 'hide' both transclusions and links so that only re-directs to the page are shown.

And hey presto! There's a nice list of synonymous terms with a variety of spellings.
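
If you'd rather not click through the filters each time, the same filtered view can be reached directly with a URL of the following form (a quick Python sketch; 'Handbag' is just the example term):

term = 'Handbag' # replace with your own term, using underscores instead of spaces
url = ("http://en.wikipedia.org/w/index.php?title=Special:WhatLinksHere/%s"
       "&hidetrans=1&hidelinks=1&limit=500" % term)
print(url)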

For example, handbag comes up with:


Clutch (handbag) (redirect page) (links)
Manbag (redirect page) (links)
Handbags (redirect page) (links)
Man bag (redirect page) (links)
Man-bag (redirect page) (links)
Manpurse (redirect page) (links)
Hand bag (redirect page) (links)
Hand-bag (redirect page) (links)
Hand-bags (redirect page) (links)
Man purse (redirect page) (links)
Evening bag (redirect page) (links)

whereas 'telescope' comes up with:

TeleScope (redirect page) (links)
Telescopes (redirect page) (links)
Perspicil (redirect page) (links)
Telescopy (redirect page) (links)
Astronomic telescope (redirect page) (links)
Telescopic observational astronomy (redirect page) (links)
Telescopically (redirect page) (links)
Astronomical telescope (redirect page) (links)
Ground telescope (redirect page) (links)