No, this isn’t another moan by yours truly about how the valuation people deal (in)correctly with states worse than death in health economics valuation exercises (phew).
This tweet interested me. There are all sorts of things you could do with a discrete choice experiment (DCE) to measure the trade-offs such patients make. When at UTS, we did a DCE that did two things, one novel and one not so novel. The first was an attitudinal exercise – Best-Worst Scaling – which found three segments among Australian retired people (our sample was around 1,100 in total) when you asked them which statements about life they related to most and least. We then did something never done before: after the survey we fed back to respondents their own results, which they could print off, bring to their doctor to discuss, use as the starting point for an end-of-life care plan, etc.; results of this form a chapter in the book referenced. Of course, the doctors at the sharp end in ICUs had warned us that, thanks to TV programmes, the general public has much higher expectations about the success/acceptability of these dramatic interventions than is true in practice, but you could do the same survey with patients. In fact the bare bones of the survey are still live at the link, so you can see how you compare with older Aussies.
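For readers unfamiliar with Best-Worst Scaling, the simplest way to summarise Case 1 BWS data is a best-minus-worst score per statement. The function name and data layout below are my own illustration, not the code we actually used:

```python
from collections import Counter

def bws_scores(responses, appearances):
    # responses: list of (best_item, worst_item) picks, one pair per choice task
    # appearances: dict mapping each item to how many tasks it appeared in
    best = Counter(b for b, _ in responses)
    worst = Counter(w for _, w in responses)
    # score = (times chosen best - times chosen worst) / times shown;
    # ranges from -1 (always worst) to +1 (always best)
    return {item: (best[item] - worst[item]) / n
            for item, n in appearances.items()}
```

Scores like these can be fed straight back to a respondent as a personalised ranking of the statements, which is essentially what the survey's feedback did.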
The second DCE was (by DCE standards) very, very simple, but was done to get a handle on the trade-offs people would make regarding the kinds of interventions in the survey in this Twitter post; unfortunately it won’t give you personalised results.
These types of DCEs should become routine. They can be done on touchscreen tablets while the patient is waiting to see the doctor, and they can give personalised results – not aggregated ones like in the bad old days. People like them, and like to know how they compare with others – the older generation love surveys comparing them with their peers just as much as the younger “Facebook generations” do. C’mon people, this survey is great and very informative, but we can move even further forward and do it today.
I and co-authors have a chapter in the just published book “Care at the end of life”, edited by Jeff Round. I haven’t had a chance to read most of it yet, but from what I’ve seen so far it’s great.
Chris Sampson has a good chapter on the objects we value when examining the end-of-life trajectory. It’s nicely written, and parts of it tie in with my series on “where next for discrete choice valuation”: parts one (which he cites), two and three, but particularly (and too late for the book) four.
The issue concerns a better separation of what we are valuing from how we value it. I came at it from a slightly different angle from Chris, though I sense we’re trying to get people to address the same question. It’s of increasing importance now that the ICECAP instruments are becoming more mainstream. I’m often thought of as “the valuation guy” – yet how we valued Capabilities is intimately tied up with how the measures might (or might not) be used, as well as the concepts behind them. When I became aware that the method we used – Case 2 BWS – would not necessarily have given us the same values as a discrete choice experiment, part of me worried… briefly. But in truth, I honestly think our method is more in tune with the spirit of Sen’s ideas. (Not to mention that we seem to be getting similar estimates, though I have explained previously why this is probably so in this instance.)
I have said quite a bit already in the blogs, but it’s nice to see others also coming at this issue from other directions. Anybody working on developing the Capabilities Approach must remain in close contact with those who are working on valuation methods.
Sorry, I should have posted this over a week ago – I’ve had a cold and been working on a project.
But the paper on the perils of highly efficient designs in DCEs has been published! Wahay.
Personally I consider this paper to be the second best of my career (after the first BWS one in JHE)…ironic that I have left academia!
In terms of the content, some economists still don’t “get it” but that’s their problem really 😉
Well, this is turning out to be a good week. First I heard the BWS book was published on time.
Now a paper that two of the global experts on discrete choice experiments found ground-breaking has been accepted for publication! The paper had previously been rejected by another good journal – a decision I (and, crucially, others) found unfair, since the two referee reports did not appear to recommend rejection (though one was fairly critical and the other didn’t really “get” a lot of what we did, it being work that challenged some tenets of the current economic orthodoxy).
Anyway the paper is called “Are Efficient Designs Used In Discrete Choice Experiments Too Difficult For Some Respondents? A Case Study Eliciting Preferences for End-Of-Life Care” by T.N. Flynn, Marcel Bilger, Chetna Malhotra and Eric Finkelstein and has been accepted by Pharmacoeconomics.
The background to the paper and its implications will form the second blog in the series I am writing about the future of health state valuation using DCEs. Suffice it to say, it is revolutionary because we got respondents to answer TWO DCEs which differed radically in their statistical efficiency – one was 100% efficient, the other 40–50% efficient. The theoretical advantages of the former were, however, heavily attenuated by the fact that many respondents resorted to heuristics that didn’t represent their true decision rule – causing biased estimates. ’Twas a cool paper, even if I say so myself. Anyway, I shall follow up on this later in the week; that’s enough for today.
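For readers wondering what “statistical efficiency” means here: a common summary is the D-error of a choice design, usually evaluated under the multinomial logit model with all coefficients set to zero (the utility-neutral assumption). The sketch below is my own minimal illustration of that calculation, not code from the paper:

```python
import numpy as np

def d_error(choice_sets):
    # choice_sets: list of (J x K) matrices, one per choice task;
    # rows are the J alternatives, columns the K attribute codes
    K = choice_sets[0].shape[1]
    info = np.zeros((K, K))
    for X in choice_sets:
        J = X.shape[0]
        p = np.full(J, 1.0 / J)          # MNL choice probabilities at beta = 0
        W = np.diag(p) - np.outer(p, p)  # covariance of the choice indicators
        info += X.T @ W @ X              # Fisher information contribution
    # D-error = |information|^(-1/K); lower is better,
    # and a "100% efficient" design minimises it for the given problem
    return np.linalg.det(info) ** (-1.0 / K)
```

The paper’s point, of course, is that a design that minimises this number in theory can still perform badly in practice if the tasks it generates push respondents into heuristics.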
In Birmingham for two days for the concluding meeting of the (European) end-of-life project, plus the advisory (steering) group meeting.
Truth be told, I wish I weren’t. The Generalised Anxiety Disorder is out in full force at the moment, so I am not really in a socialising mood.
Oh well. That’s life. This is my final obligation in terms of academic projects. From here on in you pay for my advice. I saw someone I don’t see enough of, which was great – I think she will be a useful source of advice on the consultancy front.
Otherwise it is:
- Former PhD advisor (supervisor)
- Former boss
- Former postdoc
- Some others I work indirectly with
Apologies for the downbeat post but I figured that as an “out mental healther” I should give the full story where possible.
Sent the “official” email to collaborators a couple of days ago with my new job details and was touched by so many lovely comments and wishes for the future.
I’ve got to get on to practicalities now – I already had to buy a new suitcase in Singapore on my journey back here to transport the heavy boots and big coats I bought there… I’ve worn neither on a regular basis since I left the UK five years ago. I’m also liaising with various groups of collaborators regarding funding applications. I’m particularly interested in getting European initiatives going – though these are NOT limited to Europe-based researchers, so I don’t want to exclude existing collaborators based elsewhere. The Horizon 2020 initiative has been mentioned to me. So any existing collaborators, or anyone who would like to participate, please get in contact. I’m interested in getting projects going in any of:
(1) Risky decision-making
(2) End-of-life decision-making
(3) Ethical issues in medicine, particularly cross-country comparisons
(4) Quality of life, particularly anything using the Capabilities Approach of Amartya Sen
(5) Response time models
(6) Agent based models and systems dynamics with DCEs embedded.
So get your thinking caps on, folks!
Not many posts recently I know – I’ve been in Europe doing various things that I’ll make public soon 🙂
Travelling to Amsterdam for the inaugural IAHPR conference on Friday. I’ll be presenting our initial work on response times in validating DCE estimates for the Australian end-of-life study. Then I’ll be travelling back to Sydney via Singapore.
I’ve had my abstracts on the supplementation of Case 1 Best-Worst Scaling (BWS) data with response time data accepted for both the IAHPR and SMDM conferences – woohoo!
This is extremely important work, done with collaborators at Newcastle University, NSW, Australia. It is the first validation of stated preference methods using physiological data – in this case, how long people take to choose their most and least preferred options. The research on response times uses the linear ballistic accumulator model – a model that has passed rigorous testing all the way from the lab with animals, up through psychology students, into real life looking at mobile phone choices, and now, finally, to ascertaining how long it takes people to agree/disagree with attitudinal statements about end-of-life care.
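For the curious, the core of the linear ballistic accumulator is easy to simulate: each response option races towards an evidence threshold at a randomly drawn speed, and the first to arrive determines both the choice and the response time. A minimal sketch with hypothetical parameter values (not those from our study); clipping negative drift rates is a simplification of the full model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lba(drifts, b=1.0, A=0.5, s=0.3, t0=0.2, n=1000):
    # drifts: mean evidence-accumulation rate for each response option
    # b: threshold, A: max start point, s: drift SD, t0: non-decision time
    drifts = np.asarray(drifts, float)
    starts = rng.uniform(0, A, size=(n, len(drifts)))    # random start points
    rates = rng.normal(drifts, s, size=(n, len(drifts)))
    rates = np.clip(rates, 1e-6, None)                   # simplification: force rates positive
    times = (b - starts) / rates                         # time for each accumulator to hit b
    choice = times.argmin(axis=1)                        # first accumulator to finish wins
    rt = t0 + times.min(axis=1)                          # observed response time
    return choice, rt
```

The attraction for validation work is that the same fitted parameters must jointly explain which option was chosen and how long the choice took, which is a far stronger test than choice data alone.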
This work is particularly interesting, as it appears, at first glance, to give insights into the “fast” and “slow” decision-making styles hypothesised by Daniel Kahneman. These insights will be crucial in helping clinicians and allied health professionals working on eliciting advance care plans – in particular, helping them understand what type of decision style influenced a particular choice made by a patient. We don’t, at this stage, make normative judgements as to which decision-making style is “most valid”! But the work provides the information society needs to decide which type should be used to construct an ACP.
The preliminary work showing the choice and attitudinal research results has received a revise-and-resubmit from a journal, and some similar work is in press in the best-worst scaling book (forthcoming, CUP).
Today I’ve been smoothing the waters to try to get another (and highly influential) partner involved in a partnership grant proposal for NHMRC. I’m hoping that this time round the various discussions on IP, potential commercialisation etc will be wrapped up long before the proposal is due in so we can have plenty of time to sharpen it up.
It will do several things, one of which is to build on the end-of-life care survey we developed from the original pilot study. Delays have largely been due to work stuff I can’t go into, and personal stuff. With luck things will be smoother sailing from here on in.
Advance care planning is important so that your relatives, friends and doctors know what treatment you’d want if you’re unable to make your wishes known.
Yet most people don’t make an advance care plan (ACP).
We’re developing tools to help facilitate this. We already have a tool that elicits your attitudes and, more importantly, feeds them back to you in a way that may help you understand your overall views on care and how you compare to the general older population of Australians.
The survey is here – give it a go.