Tag Archives: academia

Encounter with a GPSI

I recently had a mole removed by a GP with a special interest (GPSI) in dermatology. It was an interesting experience, given that the first ever discrete choice experiment I conducted elicited patient preferences for exactly this type of doctor and specialty.

The study was piggy-backed onto an early (the first?) trial of GPSI care. That trial established equivalence with the traditional consultant-led secondary care model (for the large proportion of cases routine enough for GPSI care to be appropriate). The DCE, however, showed resistance to GPSI-type care among patients on average. This was unsurprising: we knew no better at the time and quoted average preferences, which usually mean little in a DCE (since you are averaging apples and oranges). The subgroup analyses I did established which patient subgroups were open to GPSI-type care (and when), and those results were all very predictable.
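To make the "apples and oranges" point concrete, here is a toy illustration (the coefficients and subgroup shares are entirely hypothetical, not the study's estimates): two subgroups with opposing preferences pool to a near-zero average that describes neither group.

```python
# Toy illustration (hypothetical coefficients, not the study's estimates):
# two subgroups with opposing preferences for GPSI-led care.
# Positive utility coefficient = prefers GPSI care; negative = prefers
# consultant-led care.
subgroups = {
    "open to GPSI care": {"share": 0.5, "beta_gpsi": 1.2},
    "resistant to GPSI care": {"share": 0.5, "beta_gpsi": -1.4},
}

# The naive "average preference" pools everyone together.
average_beta = sum(g["share"] * g["beta_gpsi"] for g in subgroups.values())

print(f"Pooled average coefficient: {average_beta:+.2f}")
for name, g in subgroups.items():
    print(f"  {name}: {g['beta_gpsi']:+.2f}")
# The pooled figure suggests mild resistance overall, yet neither
# subgroup actually holds that mild preference.
```

This is why subgroup analysis, rather than the headline average, was the informative output of the study.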

It is the wording we were strongly encouraged to use for the attributes (such as the doctor description) that is the subject of this post, particularly in the light of my personal experience of such care “at the sharp end”. We did not use the actual job titles of the doctors. Had we done so, we would have given respondents the choice between “seeing a member of a consultant-led team, which may or may not be the consultant him/herself” and “seeing a GP who has had (considerable?) special additional training in dermatology”, making it clear that (1) many people don’t see the consultant, contrary to what they believe, and (2) a GPSI is perfectly qualified to deal with their condition, and if anything non-routine is found the patient is instantly moved to the consultant-led team’s care.

Now, I know why the triallists didn’t like this: patients see “GP” and instantly form (often incorrect) opinions. That was brought home to me when I saw a doctor at the local hospital in Nottingham (actually a private treatment centre subcontracted by the NHS): he never revealed he was a GPSI until we started “talking shop”, and suddenly his ID badge was held up in front of me with the exclamation “I was one of the first GPSIs in dermatology appointed!” My referral letter said I would see (consultant) Dr X or a member of his team. Hmmmm. Thankfully I had no preconceptions, and received top-notch care – I would certainly see him again if I needed to. (Of course I looked up this GPSI subsequently, and it turns out he specialised in surgery first before moving to general practice to improve conditions for family life, so he was particularly well qualified.)

But it did illustrate, albeit anecdotally, that what was really required was a DCE with “labels” (the actual doctor types) to capture the true patient preferences: that would focus minds on the need for a public education campaign to reduce the stigma associated with GPSIs. What we did, although not misleading in terms of describing the doctors, brushed the underlying problem under the carpet. (So we should have run a labelled DCE – we knew no better then, but I am using my own experience to illustrate a serious problem that continues unabated in health. That’s for another day, however.)
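To illustrate why the label matters econometrically (a sketch only, with illustrative names and numbers, not our actual model): in a labelled DCE each alternative carries its real job title, so it can be given an alternative-specific constant (ASC) that absorbs label effects such as stigma. In an unlabelled design the alternatives are generic, so that effect simply cannot be identified.

```python
# Sketch of the two utility specifications (illustrative values only).

def utility_labelled(asc, betas, attributes):
    """Labelled DCE: U = ASC + sum of attribute effects.
    The ASC can capture pure label effects (e.g. stigma around 'GP')."""
    return asc + sum(b * x for b, x in zip(betas, attributes))

def utility_unlabelled(betas, attributes):
    """Unlabelled DCE ('Doctor A' vs 'Doctor B'):
    U = attribute effects only; no label effect can enter the model."""
    return sum(b * x for b, x in zip(betas, attributes))

# Hypothetical example: identical attribute levels, but only the labelled
# design lets a negative ASC express stigma attached to the GPSI title.
betas = [0.8, -0.3]      # e.g. [waiting time saved, travel distance]
attributes = [1.0, 2.0]
asc_gpsi = -0.5          # hypothetical stigma attached to the "GP" label

print(utility_labelled(asc_gpsi, betas, attributes))  # stigma included
print(utility_unlabelled(betas, attributes))          # stigma invisible
```

The gap between the two utilities is exactly the label effect a generic design sweeps under the carpet.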

The other attribute I would change, with the benefit of having been an actual patient, is location of care. The DCE heavily implied that non-hospital care would be at a local general practice. Of course, if your general practice doesn’t have the facilities to do minor surgery then this may be grossly misleading. Indeed, I had to travel further than the local hospital to get to the GPSI’s surgery for my mole removal. As it happens it didn’t matter: distance as the crow flies was not the important factor in my ability to get there. However, it immediately made me slightly annoyed at the guidance I, as the DCE lead, received when I did the study. The wording we used was, again, “technically correct” in that the choice was between a place of care that was convenient and local versus one that was not, but I’m fairly sure a non-trivial number of our respondents could have made incorrect assumptions about these attribute levels. I know I did, and I ran the DCE!

It made me a bit (more) cynical about the motives of certain parts of academia: I’d already seen via twitter a much-heralded result of a trial I know about that, shall we say, could have been improved upon immensely. Furthermore, I had pause for thought recently when I learnt that some members of industry consider academia-led literature reviews and so-called systematic reviews in certain areas of health to be not worth the paper they’re written on. (I can concur regarding recent reviews in my own field.) In a time that has seen a huge amount of industry-bashing for selective release of information and selective publication, it really does act as a reminder that some areas of academia need to take a good hard look at their own conduct.

To be fair, I do shout out about the amazing groups I have worked with or continue to work with. I just feel Ben Goldacre and Danny Dorling were bang on the money in their beliefs (informed by different, and particularly damning, evidence) that bad practice by academia and its associated institutions contributes to the general lack of public confidence in the “elites”, and that “having your own facts”, whilst of course ludicrous, is a perfectly understandable public reaction to elites that no longer seem to uniformly put the public good first.

As usual I shall make the caveat that there are great groups I work with and this isn’t just “academia bashing”. I just offer constructive criticism based on my own experiences (and mistakes) and give examples of the kind of lack of transparency that cleverer people like Ben and Danny have highlighted as barriers to getting academia more support among the general populace.

happy new 2016

A slightly early happy new year post!

It has certainly been a bit of a roller-coaster year for me – finishing up in Australia and spending 5 months in Sweden before returning to my home town to set up my own company. Things are settling down nicely now (thankfully).

It would not be appropriate for me to air dirty linen in public about some of my experiences in academia. Suffice to say, in private conversations many of my former colleagues voiced similar complaints and frustrations with the way the profession is going. Of course, not every institution suffers from the same problems; I’m sure some places have very few, and the universities I worked at all had excellent qualities, each in its own way. Indeed, perhaps the problems lie more with the profession generally, and with the funding environment, which forces universities simply to react in certain ways.

My hope for 2016, to help my former colleagues, is that professional organisations become more supportive of researchers, that credit is given where it is due, and that funding agencies no longer favour certain groups simply for their wider institutional reputation rather than their technical expertise. One gripe I will mention is the very marked deterioration in citation practice that I saw in my field: over the last 5-10 years I increasingly saw practices that would certainly fail PhDs, and maybe even undergraduate theses. I saw papers that seemed rushed to a shocking degree (with no proper literature review), or that deliberately referenced certain other papers to game the funding systems (which, in a number of countries, increasingly rely on citations).

As a final thought I will return to the issue of how groups within academia are being pushed to work, via funding and incentive structures: they must act like consultancies, with key performance indicators and applied (as opposed to theoretical/blue-skies) outcomes. My response was to become a consultancy myself: after all, why would potential clients pay for layers of administration and a quasi-consultant when they can get the real deal for a much lower price?

Of course that can be taken as a plug for my services, but hey, we all need to make a living 🙂

And on that note, happy new year!

I’m to be a Professor

From 1st January I am to be appointed Professor of Medical Decision Making at the Centre for Research Ethics and Bioethics at Uppsala University in Sweden.

I am so happy and proud to be able to work with such a positive, dedicated group of researchers and at such a globally renowned university in the field of biotechnology and health research. I have just returned from a lovely 3 days in Uppsala – I was particularly pleased that the researchers are open-minded and have particular strengths in the ethics and psychology of human decision making in health. I will be responsible for growing a centre of excellence in discrete choice experiments in health care, with a particular focus on 21st century issues in choice – such as the implications of genetic testing.

I am very appreciative of the methodological expertise I have gained whilst in Australia. It’s a shame I have to leave, but that’s life.

Institute for Choice to launch

The new Institute for Choice will launch in Sydney! As the press release says, we will bring together the highest concentration of academic expertise in choice behaviour research globally (including Jordan Louviere, Joffre Swait, John Rose and Tony Marley).

The official opening will take place in Sydney and Adelaide soon.

I’m going to be submitting proposals on various health themes including my core areas of:

  • Valuing quality of life / wellbeing using Amartya Sen’s Capabilities Approach
  • End-of-life care
  • Public policy and how decisions can be modelled systematically to help society

More to follow soon.


quality vs quantity in academic publishing

OK, Richard and Chris have each weighed in on this debate in the light of a twitter exchange that Chris drew attention to.

My own take is that those proposed solutions go after the symptoms (the incentives) rather than the underlying disease: inadequate time and resourcing. I’ll use my own experience as an example. The UK MRC Health Services Research Collaboration ran for around a decade, covering most of the noughties. I worked in it (and in the group it became after its formal closure) from 2001 to 2009. What was refreshing was the culture: no pressure for short-term benefits. I had annual funded trips to Sydney to work with Jordan Louviere on developing best-worst scaling. I produced nothing of substance for six years on a methodology that came to define my early-to-mid career. My citations in my first post-doc positions were, frankly, crap compared with those of my peers and of many postdocs today: theirs went up linearly; mine were atrocious.

Then an interesting thing happened: whilst theirs continued to go up linearly, mine went exponential; now my citations and bibliometric indices beat those of many professors of health economics, nationally and internationally. Furthermore, compared to many of my peers, a relatively large proportion of my papers contribute to my h-index, something that becomes less and less true the more senior you become (unless you are an Einstein and every paper is paradigm-shifting).
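For readers unfamiliar with the h-index mentioned above, here is a quick sketch of how it is computed (the citation counts are made up for illustration, not my own record):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Toy comparison: citations spread evenly across papers (most papers
# contribute to the h-index) versus a top-heavy record where only a
# few blockbusters count.
even_spread = [12, 11, 10, 9, 9, 8, 8, 8]   # h = 8: all 8 papers count
top_heavy = [300, 250, 5, 4, 3, 2, 1, 1]    # h = 4: huge totals, low h

print(h_index(even_spread))  # 8
print(h_index(top_heavy))    # 4
```

Note that the top-heavy record has vastly more total citations but a much lower h-index, which is the point about contribution per paper made above.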

I spoke to senior members of the UK Department of Health about BWS in the early noughties. The working paper that became my seminal 2007 publication was specifically cited in a call for proposals by the UK NIHR HTA programme for work to develop a social care instrument. I was then the methodological leader in the team that won the bid.

Suffice to say I’ve had a large impact with my work – in both research and policy. I’m not the sharpest knife in the drawer – I stand on the shoulders of giants like Tony Marley and Jordan. However, I had a lot of time to develop the research. I wonder whether we had things right in the “good old days”, when academics had far fewer of the time and resourcing constraints they have today? Yes, I know it’s almost certainly a lost cause I’m espousing, given the current way government research funding works (i.e. incorrectly).