adaptive conjoint

I was interviewed for the MR Realities podcast by Kevin Gray and Dave McCaughan a week or so ago. It went well (bar a technical glitch that briefly cut the VOIP call at one point) and apparently the podcast is doing very well compared to others.

One topic raised was adaptive conjoint analysis (ACA). This method “tweaks” the choice sets presented to a respondent based on his/her initial few answers and thus (the theory goes) “homes in” more quickly and efficiently on the trade-offs that matter most to him/her. The trouble is, I don’t like it and don’t think it can work – and the last time I spoke to the design expert Professor John Rose about it, he felt similarly (though our solutions are not identical). There are three reasons I dislike it.

  1. Heckman shared the 2000 Nobel prize with McFadden in part for showing that sampling on the basis of the dependent variable is perilous and typically yields biased estimates – the long-recognised endogeneity problem. Adapting the design to the respondent’s observed choices is exactly this kind of sampling.
  2. The second reason is probably more accessible to the average practitioner: suppose the respondent just hasn’t got the hang of the task in the first few questions and unintentionally misleads you about what matters – you may end up asking a load of questions about the “wrong” features.
    You may ask what evidence there is that this happens. Well, my last major paper as an academic showed that even the smallest typical “standard” design that yields individual-level estimates of all the main feature effects (the Orthogonal Main Effects Plan, or OMEP) can lead you up the garden path – in our case because respondents resorted to heuristics when the task got difficult – so I simply, genuinely don’t understand how asking an even smaller number of questions allows me to make robust inferences.
  3. But it gets worse: the third reason I don’t like adaptive designs is that if a friend and I appear to have different preferences under the model, I don’t know whether we genuinely differ or whether answering different question designs caused the result (the estimates are confounded with the design). And the other key finding of the paper I just mentioned confirmed a body of evidence showing that people interact with the design – you can get a different picture of what I value depending on what kind of design you gave me. Which is very worrying. So I just don’t understand the logic of adaptive conjoint, and I follow Warren Buffett’s mantra – if I don’t understand the product, I don’t sell it to my clients.
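The selection problem in the first reason is easy to see in miniature. The toy sketch below (plain NumPy linear regression – my own illustration of the general principle, not a conjoint model) fits a slope on a full random sample and then on a subsample selected on the dependent variable; the selected-sample estimate is attenuated:

```python
import numpy as np

# Toy illustration: selecting observations on the basis of the
# dependent variable biases the slope estimate. This is the essence
# of the endogeneity concern with conditioning choice sets on
# observed choices.
rng = np.random.default_rng(42)

n = 100_000
true_slope = 2.0
x = rng.standard_normal(n)
y = true_slope * x + rng.standard_normal(n)  # y = 2x + noise

# OLS (slope, intercept) on the full, randomly sampled data
full_slope = np.polyfit(x, y, 1)[0]

# Now keep only observations selected on the *outcome* (y > 0),
# mimicking a procedure that conditions on earlier responses
keep = y > 0
sel_slope = np.polyfit(x[keep], y[keep], 1)[0]

print(f"full sample slope:     {full_slope:.2f}")  # close to the true 2.0
print(f"selected sample slope: {sel_slope:.2f}")   # attenuated towards zero
```

The same logic applies when the “sample” of choice sets a respondent sees is conditioned on his/her earlier choices.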

John Rose and Michiel Bliemer wrote a paper for a conference back in 2009 debunking the “change the design to fit the individual” idea. Their solution was novel: the design doesn’t vary by individual (so no confounding issue), but it does change for everyone after a set number of questions. It’s a type of Bayesian efficient design, but it requires some heavy computational lifting during the survey itself that most people could not do.
Though I think it’s a novel solution, I personally would only do this once everyone has first completed a design (for instance, at least the OMEP) that elicits individual-level estimates; then, after segmentation, you could administer a second complete survey based on those results. Indeed, that would solve an issue that has long bugged me: how do you know what priors to use for an individual if you don’t already have DCE results for that individual (since heterogeneity nearly always exists)? But I also have a big dose of scepticism about very efficient designs anyway, given the paper I referenced – and that is a different can of worms I opened 🙂
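For anyone unfamiliar with the OMEP mentioned above, its defining property is orthogonality: every pair of attributes shows each combination of levels equally often, which is what lets you estimate the main effects independently of one another. A minimal sketch of my own, for the simplest case of three two-level attributes (the four-run L4 array):

```python
from itertools import combinations
from collections import Counter

# Smallest OMEP for three two-level attributes: the L4 orthogonal
# array. Each row is one profile; entries are attribute levels (0/1).
omep = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def is_orthogonal(design):
    """Check that for every pair of attributes, each of the four
    level combinations (0,0), (0,1), (1,0), (1,1) appears exactly once."""
    n_cols = len(design[0])
    for i, j in combinations(range(n_cols), 2):
        counts = Counter((row[i], row[j]) for row in design)
        if len(counts) != 4 or set(counts.values()) != {1}:
            return False
    return True

print(is_orthogonal(omep))  # True
```

Change any single entry of the array and the check fails – which is a compact way of seeing how little slack the smallest “standard” design has, let alone anything smaller.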