This post concerns a Twitter poll and discussion, initiated by Chris Carswell (editor of Pharmacoeconomics and The Patient; Twitter handle @PECjournal), on whether a statement should be added to a paper to the effect that the authors’ model, when requested, was not submitted for peer review.
I abstained, saying I think a statement should be made if it’s a “traditional” decision-analytic model or similar CEA/CUA, but I personally don’t favour it for DCEs.
The two counter-arguments made were that:
- Proprietary models go against the spirit of transparency that is increasingly demanded, &
- My own point, that model selection for DCEs is part art, applies equally to qualitative research, yet qualitative researchers still have to submit discussion guides/full surveys.
I do acknowledge both points, but my responses would be as follows:
(1) Proprietary software is routinely used to generate designs and (particularly) to analyse the results of economic and other models: we’re into the nitty-gritty of the likelihood-maximisation routine used (EM algorithm or otherwise), the starting-value routines used internally by the stats program, and so on. The ultimate black box is the software that does everything for the novice/inexperienced DCE researcher, mentioning no names 😉
Now, that doesn’t make things right, but it does mean that unless the researcher has the full code for everything from DCE design to model selection, or can reference it all for reviewers, I don’t think singling out the DCE model-selection issue is fair.
(2) I have no objections to submitting the design of the survey. When I was a reviewer, most fatal errors were made in the design, and I take the view that no DCE can be properly reviewed without the reviewers having access to the design. (Another reason why authors might like to think twice before using “adaptive conjoint”: are they going to provide the design administered to every respondent? Haha, thought not. And if they do, will reviewers check through such a model, which would involve programming it in their own software? Haha, thought not.) I myself also provide details of the main and secondary analyses I conducted. These can all be reproduced by reviewers, if they want to. The difficulty – and I believe, from my (far more limited, I acknowledge) experience/observation of the analysis of qualitative data, that it’s the same there – is that value judgments are made: e.g. “have we really reached saturation?” For the reviewer it comes down to “in my experience, do I agree with this?”
And, unfortunately, in my experience in academia, too few peers had sufficient experience – and I mean designing, analysing and interpreting DCEs across multiple fields – to feel comfortable endorsing me when I say “I didn’t use the model dictated by the BIC – or whatever statistical rule you may like – because it routinely gives too many latent classes, and I used my experience to choose the best model”. Sorry, yes, I sound arrogant, but when any one DCE has literally an infinite number of solutions – a point still ignored or misunderstood by most practitioners – then inevitably experience, and gut feelings based on intimate knowledge of your sample, data and survey, become paramount.
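For readers unfamiliar with the “statistical rule” in question, here is a minimal sketch of how BIC-driven selection mechanically works: compute BIC = k·ln(n) − 2·ln(L̂) for each candidate number of latent classes and take the minimum. The class counts, parameter counts and log-likelihoods below are entirely made up for illustration; they are chosen to show how BIC keeps rewarding extra classes whenever each one buys enough likelihood, which is precisely how it can point to more classes than are substantively interpretable.

```python
import math

def bic(log_lik, n_params, n_obs):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L); lower 'wins'."""
    return n_params * math.log(n_obs) - 2.0 * log_lik

# Hypothetical latent class logit fits on 300 respondents:
# (classes, free parameters, maximised log-likelihood) -- illustrative only.
fits = [(2, 11, -1450.0), (3, 17, -1404.0), (4, 23, -1372.0), (5, 29, -1350.0)]
n_obs = 300

scores = {classes: bic(ll, k, n_obs) for classes, k, ll in fits}
best = min(scores, key=scores.get)

for classes in sorted(scores):
    print(f"{classes} classes: BIC = {scores[classes]:.1f}")
print("BIC picks", best, "classes")
```

With these (invented) numbers the rule selects the largest model on offer; a mechanical reviewer checklist would stop there, whereas the judgment call being defended above is whether those extra classes mean anything.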
In short, model selection skills can’t simply be taught; they must be gained through experience.
And you are fully entitled to say “well, you would say that; you work in industry now”. To which I’d respond: yes, I do have an interest in saying that, but why are academic groups that routinely delay competitor groups’ papers, mis-reference things to skew publication metrics and funding likelihood, etc., not pulled up on their shenanigans? I got a Google citation report just today for something – and, seeing the authors, I would have bet 100 GBP with anyone on the planet (before reading it) that the paper of mine that was absolutely crucial to this new publication would not be the one cited. I would have won the bet: the citation was to something else of mine entirely. I just laugh at these things now; they don’t affect me or my business, but it’s rather sad that they still go on. Particularly in this case, when it can contribute to more QALY valuation studies that can’t possibly give the right answer – how is that defensible on equity or efficiency grounds?
So, until basic rules of research – and we’re talking the stuff I was taught in my first PhD supervision, like “get the primary source”, not even the more recent transparency stuff – are followed consistently by academics, I’m afraid industry is entitled to retort that “people in glass houses shouldn’t throw stones”.