One of the blogs I follow is Math Babe, and she has just published a book on (amongst other things) the problems with big data, which I intend to buy and read as soon as I get the time. The Guardian reprinted some of it, which is great for bringing this to a wider audience.
I left the following comment at her entry which mentions the Guardian article, but I think it may have disappeared into moderation purgatory: my first attempt to post “from an account” was via the WordPress.com one, which I don’t use (as opposed to this .org). Anyway, the gist of what I said was that she is entirely right to lambast the use of automatic rules and algorithms to analyse (for instance) personality data used in recruitment. However, smart companies (1) don’t use psychometric data, they use Best-Worst Scaling, which cannot be “gamed”, and (2) use human input to interpret the results. Anyway, here’s my comment to her blog post… EDIT – the comment appeared, hooray!
Hi. Nice article and I intend to get and read the book when things calm down a little in my work life. I just have two comments, one that is entirely in line with what you have said, and one which is a mild critique of your understanding of the personality questionnaires now being used by certain companies.
First, I agree entirely that the “decision rule” to cut down the number of “viable” candidates based on various metrics should not be automated. Awful practice.
Second, and where I would disagree with you, is on the merits of the “discrete choice” based personality statements (where you *have* to agree with one of several not-very-nice traits). This is not, in fact, psychometrics. It is an application of Thurstone’s *other* big contribution to applied statistics, random utility theory, which is most definitely a theory of the individual subject (unlike psychometrics, which uses between-subject differences to make inferences).
I think you may be unaware that if an appropriate statistical design is used to present these (typically best-worst scaling) personality-trait questions, then the researcher obtains ratio-scaled (probabilistic) inferences which must, by definition, be comparable across people, and which allow you to separate people on *relative* degrees of (say) the Big Five. That is why they can’t be gamed, and why I know of a bank that sailed through the global financial crisis by using these techniques to ensure a robust spread of individuals with differing relative strengths.
If two people genuinely are the same on two less-attractive personality traits, then the results will show their relative frequencies of choice to be equal, and those traits will also have competed against other traits elsewhere in the survey (and will probably appear “low down” on the latent scale). So there’s nothing intrinsically “wrong” with a personality survey using these methods (see work by Lee, Soutar and Louviere, who operationalised it for Schwartz’s values survey) – indeed there is lots to commend it over the frankly awful psychometric paradigm of old.
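To make the “relative frequencies of choice” point concrete, here is a minimal sketch (not from any real survey instrument – the trait names and data are purely illustrative) of the simplest best-worst scaling analysis: count how often each trait is picked as “best” and “worst” across a person’s choice tasks, and take the normalised difference as its position on the latent scale.

```python
from collections import Counter

def best_worst_scores(tasks):
    """Simple best-minus-worst counting scores for each trait.

    tasks: list of dicts with keys "shown" (traits displayed in that task),
    "best" and "worst" (the respondent's picks from those shown).
    Returns trait -> (best picks - worst picks) / times shown, a score in
    [-1, 1]: traits never rejected float to the top of the latent scale,
    traits repeatedly rejected sink to the bottom.
    """
    best, worst, shown = Counter(), Counter(), Counter()
    for task in tasks:
        for trait in task["shown"]:
            shown[trait] += 1
        best[task["best"]] += 1
        worst[task["worst"]] += 1
    return {t: (best[t] - worst[t]) / shown[t] for t in shown}

# Hypothetical responses from one person across three choice tasks,
# each forcing a "most like me" / "least like me" pick among unattractive traits
tasks = [
    {"shown": ["stubborn", "impatient", "blunt"],
     "best": "blunt", "worst": "impatient"},
    {"shown": ["impatient", "blunt", "aloof"],
     "best": "blunt", "worst": "aloof"},
    {"shown": ["stubborn", "aloof", "impatient"],
     "best": "stubborn", "worst": "impatient"},
]
print(best_worst_scores(tasks))
# "impatient" is rejected whenever it appears, so it scores lowest;
# two respondents who genuinely share a trait produce equal frequencies
```

In practice the design (which traits appear together, how often) comes from a balanced statistical plan, and the scores feed a proper random-utility model rather than raw counts – but the counting version already shows why self-consistent answers, not flattering ones, determine where a trait lands.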
I would simply refer back to my first point (where we agree) and say that the interpretation of the data is an art, not a science, which is why people like me get work interpreting these data. Incidentally, and on that subject, I can relate to the own-textbook buzz: mine came out last year. Smart companies already know how to collect the right data; they just realise they can’t put the results through an algorithm.