Two papers in the past week have given me cause to remind people that efficient designs in DCEs may not be the bee's knees. I posted about this in more detail a while back, when my paper showing it was accepted; that work ultimately became part of my SSRN paper after publication.
I'll quote from that post:
Street and Burgess had begun to provide CenSoC with designs whose efficiency was 100% (or close to it), rather than 30-70%. We loved them and used them practically all the time. In parallel with this, John Rose at the Institute of Transport and Logistics Studies at Sydney University had begun utilising highly efficient designs – though of a different sort. However, what efficient designs have in common – and really what contributes heavily to their efficiency – is a lack of level overlap. This means that if the respondent is presented with a pair of options, each with five attributes, few, and in many cases none, of those attributes will have the same level in both options. Thus, the respondent has to keep in mind the differences in ALL FIVE ATTRIBUTES at once when making a choice.

Now, this might be cognitively difficult. Indeed John Rose, to his immense credit, made abundantly clear in the early years in a published paper that his designs, although STATISTICALLY EFFICIENT, might not be "COGNITIVELY EFFICIENT", in that people might find them difficult (pushing up their error variance) or, even worse, use a simplifying heuristic (such as "choose the cheapest option") in order to get through the DCE. (Shame on us CenSoCers for not reading that paper more closely.) Clearly in the latter case you are getting biased estimates – not only are your parameter estimates biased (in an unknown direction) but the functional form of the utility function for such respondents is wrong. Now John merely hypothesised this problem – he had no empirical data to test his hypothesis, and recommended that people go collect data. For many years they didn't.
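To make the level-overlap idea concrete, here is a minimal sketch (not from the quoted post; the attribute names and levels are hypothetical) that counts how many attributes are presented at the same level in both options of a single choice task. An "efficient" task typically scores zero, forcing the respondent to trade off every attribute at once:

```python
# Illustrative sketch: level overlap between the two options in one choice
# task. All attribute names and levels below are made up for illustration.

def level_overlap(option_a: dict, option_b: dict) -> int:
    """Number of attributes shown at the same level in both options."""
    return sum(option_a[attr] == option_b[attr] for attr in option_a)

# A typical "efficient" task: no attribute shares a level across options,
# so the respondent must weigh all five differences simultaneously.
efficient_task = (
    {"cost": 10, "wait": 2, "quality": "high", "distance": 5, "staff": "GP"},
    {"cost": 20, "wait": 1, "quality": "low",  "distance": 1, "staff": "nurse"},
)

# A task with overlap: two attributes are tied, so only three actually
# differ and the cognitive burden is lower.
overlapping_task = (
    {"cost": 10, "wait": 2, "quality": "high", "distance": 5, "staff": "GP"},
    {"cost": 20, "wait": 2, "quality": "high", "distance": 1, "staff": "nurse"},
)

print(level_overlap(*efficient_task))    # 0
print(level_overlap(*overlapping_task))  # 2
```

The tension in the post is precisely this: pushing overlap towards zero raises statistical efficiency but may push respondents towards heuristics.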
My study was the first within-subject study to be published (though I know of at least one other within-subject study that was doing the conference rounds at about the same time and may well have been published since). It certainly influenced my thinking, and the paper in the current AHE Blog has found – albeit, I believe, using a between-subject design only – that yes, efficient designs for "complete EQ-5D-5L described lives" seemed to produce problematic beta estimates. They advocate two-step designs – something a group I'm working with is already doing… hopefully we will have some interesting findings to present next year at a EuroQoL Group meeting.
I shall simply end with a warning I put in the SSRN paper concerning efficient designs (which certainly have their place, don't get me wrong, but you can't use them unthinkingly):
Of course if it turns out that greater precision has been gained at the expense of bias, then efficient designs replace what is merely an annoyance with a crippling flaw.