I recently had a mole removed by a GP with a special interest (GPSI) in dermatology. It was an interesting experience, given that the first ever discrete choice experiment I conducted elicited patient preferences for exactly this type of doctor and specialty.
The study was piggy-backed onto an early (the first?) trial of GPSI care. That trial established equivalence of care with the traditional consultant-led secondary care model (for the large proportion of cases that are routine enough for GPSI care to be appropriate). The DCE, however, showed resistance to GPSI-type care among patients, on average. This was unsurprising: we knew no better at the time and quoted average preferences, which usually mean nothing in DCEs (since you are averaging apples and oranges). Subgroup analyses I conducted established which patient subgroups were open to GPSI-type care (and when), and those results were all very predictable.
It is the wording we were strongly encouraged to use for the attributes (such as the doctor description) that is the subject of this post, particularly in light of my personal experience of such care “at the sharp end”. We did not use the actual job titles of the doctors. Had we done so, we would have given respondents the choice between “seeing a member of a consultant-led team, which may or may not be the consultant him/herself” and “seeing a GP who has had (considerable?) special additional training in dermatology”, making it clear that (1) many people don’t see the consultant, contrary to what they believe, and (2) a GPSI is perfectly qualified to deal with their condition, and if anything non-routine is found, they are instantly moved to the consultant-led team’s care.
Now, I know why the triallists didn’t like this: patients see “GP” and instantly form (often incorrect) opinions. That was brought home to me when I saw a doctor at the local hospital in Nottingham (actually a private treatment centre subcontracted by the NHS): he never revealed he was a GPSI until we started “talking shop”, whereupon his ID badge was suddenly held up in front of me with the exclamation “I was one of the first GPSIs in dermatology appointed!” My referral letter had said I would see (consultant) Dr X or a member of his team. Hmmmm. Thankfully I had no preconceptions, and received top-notch care – I would certainly see him again if I needed to. (Of course I looked up this GPSI subsequently, and it turns out he specialised in surgery first before moving to General Practice to improve conditions for family life, so he was particularly well qualified.) But it did illustrate, albeit anecdotally, that what was really required was a DCE with “labels” (the actual doctor types) to capture the true patient preferences: that would focus minds on the need for a public education campaign to reduce the stigma associated with GPSIs. What we did, although not misleading in terms of describing the doctors, brushed the underlying problem under the carpet. (So we should have run a labelled DCE – we knew no better then, but I am using my own experience to illustrate a serious problem here that continues unabated in health. That’s for another day, however.)
The other attribute I would change, with the benefit of having been an actual patient, was location of care. The DCE heavily implied that non-hospital care would take place at a local general practice. Of course, if your general practice doesn’t have the facilities for minor surgery then this may be grossly misleading. Indeed, I had to travel further than the local hospital to get to the GPSI’s surgery for my mole removal. As it happens, it didn’t matter: distance as the crow flies was not the important factor in my ability to get there. However, it immediately made me slightly annoyed at the guidance I, as the DCE lead, received when I did the study. The wording we used was, again, “technically correct” in that the choice was between a place of care that was convenient and local versus one that was not, but I’m fairly sure a non-trivial number of our respondents could have made incorrect assumptions about these attribute levels. I know I did, and I ran the DCE!
It made me a bit (more) cynical about the motives of certain parts of academia: I’d already seen via Twitter a much-heralded result of a trial I know about that, shall we say, could have been improved upon immensely. Furthermore, I had pause for thought recently when I learnt that some members of industry consider academia-led literature reviews and so-called systematic reviews in certain areas of health to be not worth the paper they’re written on. (I can concur regarding recent reviews in my own field.) At a time that has seen a huge amount of industry-bashing for selective release and publication of information, it really does act as a reminder that some areas of academia need to take a good hard look at their own conduct. To be fair, I do shout out about the amazing groups I have worked with or continue to work with. But I feel Ben Goldacre and Danny Dorling were bang on the money in their beliefs (informed by different, and particularly damning, evidence) that bad practice by academia and its associated institutions contributes to the public’s general lack of confidence in the “elites” – and that “having your own facts”, whilst of course ludicrous, is a perfectly understandable public reaction to elites that no longer seem to uniformly put the public good first.
As usual I shall make the caveat that there are great groups I work with and this isn’t just “academia bashing”. I simply offer constructive criticism based on my own experiences (and mistakes) and give examples of the kind of lack of transparency that cleverer people like Ben and Danny have highlighted as barriers to winning academia more support among the general populace.