Category Archives: Surveys

Happiness isn’t quality of life if you’re old

The subject of happiness, particularly among older people, has come up (again) in the media. I reckon they trot out the latest survey results whenever there’s a slow news day. I think it’s no coincidence the newest stories have appeared in the slow month of August.

Anyway, I shall keep this short as I’ll rant otherwise. Once again, neither happiness nor life satisfaction is the same as quality of life, and we can argue till the cows come home as to which of the three (if any) is truly well-being.

First of all, if I can find the time to write up a follow-up to the paper I published on the mid-2000s survey of Bristolians, I will show this:

Five-year age bands showing mean levels (after rescaling) of self-rated happiness versus scored quality of life in Bristol


The two track reasonably closely until retirement age. Then, whilst happiness continues to rise, quality of life certainly does not. The wealth of other evidence on health, money, friends, etc. from the survey suggests our QoL measure, the ICECAP-O instrument, is the better measure of overall well-being.

We are not the only ones to find this. A large US study pretty much concluded they didn’t know WTF older people were doing when they answered life satisfaction/happiness questions, but older people sure don’t answer them the same way that younger adults do. They use a different part of the numerical scale (typically a higher portion, all other things being equal). That’s rating scale bias, and there is a huge and growing literature on it.

Stop asking these dumb questions. There are good alternatives.


CADR conference and psychometrics

My presentation yesterday at the Centre for Applied Disability Research conference seemed to go down well. There was one comment that I get frequently, so I thought I’d give a more complete answer here. The question is always a variant of the following:

“Why should we use ICECAP-O or one of those instruments you’re touting when there’s the WHOQoL instrument or instrument x/y/z that has been validated already?”

A tag-on is often to the effect of “how can your 5-item instrument beat our 10/15/50-item instrument, which is bound to be better for individuals?”

Well, the answer is simple – it comes down to a difference in paradigm, in particular the difference between psychometrics and random utility theory. I could be rude at conferences (but am not) in countering the (occasionally a tad aggressive) attacks I get on ICECAP-O for “not being individual-specific”. My “slightly aggressive” response would be: “Actually, it’s YOUR instrument that isn’t individual-specific – psychometrics isn’t about the individual; it uses differences BETWEEN individuals to validate the response categories. Random Utility Theory (RUT) is EXPLICITLY a theory of how the individual makes choices, and as such any instrument based on it is by definition an individual-level QoL instrument.” For ICECAP-O (or any of the other instruments in the ICECAP family, or the CES) I could, in theory, give any respondent THEIR OWN set of scores (a “tariff”, to use health economics parlance) if they did the choice experiment. You CANNOT do that with existing instruments – with the exception of some health-based ones that use the time trade-off/standard gamble, and then only IF they’d asked the right set of questions to concentrate on individual-level scores.

This individual respondent tariff reflects the trade-offs THAT INDIVIDUAL would make between the items and how bad the various impairments are to that person. You can’t get that from any of these instruments I hear touted as being “superior”, since they were validated on the basis of between-person differences – by definition they cannot be tested/validated at the individual level (not least because there are no scores at that level, certainly not preference-based ones that reflect how bad the impairments all are on a common scale).

So ICECAP-O and the other instruments beat them all when it comes to the issue of “the individual level”. We can feed back individual-level scores – and indeed we did, for the end-of-life care survey, which you too can do if you click on surveys and go back a page or two. So not only are there 4 to the power 5 (1024) distinct utility values available – it is the PROFILE defined by the set of 5 answers that matters, not the number of questions – but these 1024 scores could be individual-specific if we wanted. Indeed, Chapter 12 of the forthcoming best-worst scaling (BWS) book (Louviere, Flynn & Marley – Best-Worst Scaling: Theory & Applications, CUP) will present subgroup Australian tariffs for ICECAP-O.
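
For the curious, the 1024 figure is just combinatorics: 5 attributes, each answered at one of 4 levels. A minimal sketch, assuming an additive tariff with entirely made-up decrements (real ICECAP-O tariffs are estimated from best-worst choice data, and the attribute names here are shorthand labels):

```python
from itertools import product

# ICECAP-O has 5 attributes, each with 4 levels (4 = full capability, 1 = worst).
ATTRIBUTES = ["attachment", "security", "role", "enjoyment", "control"]

# Hypothetical, illustrative decrements from full capability (1.0) for each
# attribute/level. Real tariffs come from fitting a discrete choice model.
DECREMENT = {4: 0.0, 3: 0.05, 2: 0.12, 1: 0.2}
tariff = {attr: dict(DECREMENT) for attr in ATTRIBUTES}

def score(profile):
    """Score a 5-tuple of levels (one per attribute) on the 0-1 scale."""
    return 1.0 - sum(tariff[a][lvl] for a, lvl in zip(ATTRIBUTES, profile))

profiles = list(product([4, 3, 2, 1], repeat=5))
print(len(profiles))            # 1024 distinct capability profiles
print(score((4, 4, 4, 4, 4)))   # full capability -> 1.0
```

The point of the post stands independently of these numbers: it is the profile of 5 answers that is scored, and in principle the `tariff` dictionary could be respondent-specific rather than a population average.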

“Testing” ICECAP-O using psychometric-based techniques may be invalid – I’m not sure – but one thing I am sure of: stop throwing mud at us for having an instrument that is “obviously worse” than these existing large-item questionnaires, because I KNOW for a fact you’ve not tested the individual-level properties of their scoring. At best we can all agree a truce and say there are two differing paradigms in use here, and at present there’s been no properly designed study that uses a common denominator on which I could compare them.

health plus social care equals what

I’ve commented on a piece in the Guardian about the integration of health and social care in the UK and the options for individuals to have personal budgets.

I mentioned both the OSCA and ICECAP instruments – I wonder if all the millions of pounds of public money that went into developing these will actually bear fruit?

I hope those stupid happiness scores are not used – they’ve been debunked several times in several countries now, but certain arms of the British establishment seem dead set on them.

Aussies take note – there is a conference I am speaking at on this very subject in Sydney in late May.

last day plus icecap

Last day at UTS! Please note that from Monday I will be an employee of the University of South Australia (UniSA). My email is up and running (see posting elsewhere) and I have an office etc. Working in North Sydney will be fantastic – I nosed around the office yesterday and it’s gorgeous. The operations team and some of the directors are already in place.

Also, if you use Twitter you might like to follow the official ICECAP measure profile, @ICECAPm – it already has news of the next user group meeting, at which I will be presenting.

For newbies: the ICECAP instruments use Sen’s Capabilities Approach as a framework for measuring and valuing well-being in a way that isn’t limited to “just” health. Thus, they are good alternatives to existing well-being instruments. The valuation is/has been done using the methods of another winner of the Nobel Prize in Economics – Daniel McFadden – discrete choice modelling, in particular the best-worst scaling variety, in which I have led development in health and am a recognised global expert. BWS is now taking the world by storm, and the book – to be published by CUP – will be finished in the next few weeks.

Best-Worst Scaling Book almost finished

Tony Marley will arrive in Sydney this weekend. He, Jordan and I will finalise the BWS book for CUP during the two weeks Tony is here – sweet!

There will be empirical chapter(s) on using the best-minus-worst scores (the ‘scores’) in analysis, together with a chapter publishing the complete set of Australian socio-demographic-related tariffs for ICECAP-O and (if collaborators permit it) the Canadian population tariff for the same instrument.
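
For anyone wondering what the best-minus-worst ‘scores’ are: in a BWS task each respondent picks the best and the worst item from a shown subset, and an item’s score is simply how often it was chosen best minus how often it was chosen worst. A toy sketch with invented responses (item labels and data are illustrative only, not from any real study):

```python
from collections import Counter

# Toy BWS data: each response is (best_item, worst_item) chosen from a
# subset of items shown to a respondent. Entirely made-up for illustration.
responses = [
    ("enjoyment", "security"),
    ("control", "security"),
    ("enjoyment", "role"),
    ("attachment", "security"),
    ("enjoyment", "control"),
]

best = Counter(b for b, _ in responses)
worst = Counter(w for _, w in responses)
items = set(best) | set(worst)

# Best-minus-worst score: times chosen best minus times chosen worst
# (often divided by the number of times the item appeared).
bw_scores = {i: best[i] - worst[i] for i in items}
for item, s in sorted(bw_scores.items(), key=lambda kv: -kv[1]):
    print(item, s)
```

Note that the scores sum to zero across items, since every response contributes exactly one +1 and one −1; what matters for analysis is the relative ordering and spacing.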

ICECAP-A UK scoring now available

For anyone wanting to use the ICECAP-A Capability instrument to measure and value well-being in the UK, we have the early view paper in Health Economics with scoring!

SCORING THE ICECAP-A CAPABILITY INSTRUMENT. ESTIMATION OF A UK GENERAL POPULATION TARIFF

Terry N. Flynn, Elisabeth Huynh, Tim J. Peters, Hareth Al-Janabi, Sam Clemens, Alison Moody, Joanna Coast

It is the first to produce both the population tariff and a proper adjustment for scale (variance) heterogeneity in the same paper (rather than doing the latter as a secondary analysis). It also shows that Brits are not all alike: there are different “types” who value different attributes of life.

end-of-life-care grant proposal

Today I’ve been smoothing the waters to try to get another (and highly influential) partner involved in a partnership grant proposal for the NHMRC. I’m hoping that this time round the various discussions on IP, potential commercialisation, etc. will be wrapped up long before the proposal is due, so we have plenty of time to sharpen it up.

It will do several things, one of which is to build on the end-of-life care survey we built from the original pilot study. Delays have largely been due to work stuff I can’t go into, and personal stuff. With luck things will be smoother sailing from here on in.

referenced in a blog

I was googling myself – as you do, and in this case to check that the YouTube video with the two-naked-guys-plus-dildo thumbnail had been taken down/edited… it hadn’t – and found that my public lecture was blogged about back in 2010!

Thanks Lyrian! Hopefully I’ll get enough data from the recent survey to update the results.

PS neither of the guys in the video thumbnail is me and the video has nothing to do with me….honest! :-p

Your end-of-life care attitudes

Advance care planning is important so your relatives, friends and doctors know what treatment you’d want if you’re unable to make your wishes known.

Yet most people don’t make an advance care plan (ACP).

We’re developing tools to help facilitate this. We have a tool to elicit your attitudes and, more importantly, feed them back to you in a way that may help you understand your overall views on care and how you compare to the general older population of Australians.

The survey is here – give it a go.