Category Archives: Economics

bad academia

I’ve written before about some poor papers I’ve seen/reviewed/refereed in academia.

Unfortunately the problem is getting worse, not better. I recently read articles that showed absolutely no awareness of key prior papers – and we are not talking about papers that could legitimately be “missed” in a literature review because of differing terminology or because they sit in unrelated disciplines. We’re talking about papers that are:

  • In the same field, or in a field that is now routinely scoured for insights because it is recognised as the source of many of the major methodological developments
  • Open access, so they can be read by literally anyone
  • Co-authored by someone who would, and should, have got a Nobel Prize had there been one in his discipline… and who was (graciously) mentioned in the Nobel lecture of an economist who *did* get the Prize, and who acknowledged that the theory had been proven elsewhere at least as early as it was in economics.

This raises the obvious question:

WHY DID THE AUTHORS IGNORE A PAPER THAT SAID EVERYTHING THEY SAY….AND SAID IT 10 YEARS AGO?

I’m afraid I have only two explanations:

  1. The authors deliberately ignored the key article, to justify their work to funders, or
  2. The authors are incompetent – had a first-year undergrad missed such an article in a summer dissertation, I’d have failed them instantly.

So, what is it?

I hesitate to harp on about my education, but sometimes you must call a spade a spade. I read economics at Cambridge University, obtaining a 2.1; I was taught Marx, mathematical economics that went beyond what my MATHS friends were doing in their degrees, high-level statistics, and proper literature reviewing. The latter was reinforced and drummed into me during my MSc in Health Economics at York and PARTICULARLY during my PhD in Social Medicine at Bristol (where I specialised in medical statistics and health economics).

I taught med students… which was a horrifying experience in terms of realising how little they understood calculus (despite supposedly having got an A, or at worst a B, in maths A-level). Now I won’t get preachy – material that was on the single-maths syllabus in 1960 was on the further-maths syllabus in my time (1990–1991), so I know A-levels had already been dumbed down to some extent. But there is a critical threshold below which a candidate simply is not competent – and I’m fed up of making excuses for these people. They get away with work that is no better than the “fake news” they excoriate in the daily pollution of my Twitter feed… and it has to stop.

Sorry, you’re shit. Your work is sub-standard and you shouldn’t, absolutely shouldn’t, be in academia. You’re simply not up to it.

And people wonder why I get angry at the rubbish I’ve had to read through for years, and why I don’t do anything academic anymore. There are a few very talented groups whom I exempt from the above – you know who you are.

Others – WHY are you doing what you are doing? How do you sleep at night?

semi-retiring from blogging

Unfortunately I shall be semi-retiring from blogging.

When I say “semi”, I mean that general discussions on my personal website and comment on my personal twitter account will become few and far between. I shall continue to make comments/blogs on my work account.

There are several, in some cases related, reasons:

(1) Standards of practice in discrete choice experiments (DCEs) in health are not improving. It’s profoundly depressing to read a blog entry/article/op-ed that has you nodding fiercely – as just happened – and then to reach the central defence of the paper: a DCE that has not followed proper practice and stands a non-trivial chance of being totally wrong.

(2) Standards of literature review are appalling and getting worse by the year. When I did my PhD you wouldn’t dream of submitting a paper that didn’t show awareness of the literature – particularly if key aspects of your design had been heavily criticised by others.

(3) I get the distinct impression that “political arguments” are trumping “data”. This partly follows on from (2): it is well known and well established why quota sampling is important in DCEs, yet “population-representative sampling” continues to be touted as an “advantage” (ha!) of DCEs in the field of QALY decision-making.

If this makes no sense to you then can I respectfully suggest you need to go do some reading?

If you don’t know the finding (from the mid-1980s) that heteroscedasticity on the latent scale is a significant source of bias, and why it matters in QALY studies, then I’m afraid you have a rather large hole in your statistical knowledge – and that worries me immensely.
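For anyone who wants to see the mechanics, here is a minimal toy simulation (my own illustration, not taken from any of the papers alluded to above). In any logit-type model only the ratio of the preference weight to the error scale is identified, so two groups with identical preferences but different variance on the latent scale appear to have different preferences:

```python
# Toy illustration: in a logit model only beta/sigma is identified, so
# groups with IDENTICAL preferences but different latent-scale variance
# yield different estimated coefficients. All numbers are made up.
import numpy as np

rng = np.random.default_rng(42)
beta_true = 1.0        # the same preference weight in both groups
n = 20_000

def simulate_group(scale):
    """Binary choices where latent utility = beta*x + scale * logistic noise."""
    x = rng.normal(size=n)                          # attribute difference
    eps = rng.gumbel(size=n) - rng.gumbel(size=n)   # difference of Gumbels = logistic
    y = (beta_true * x + scale * eps > 0).astype(float)
    return x, y

def logit_mle(x, y):
    """One-parameter logistic regression via Newton's method."""
    b = 0.0
    for _ in range(50):
        p = 1.0 / (1.0 + np.exp(-b * x))
        grad = np.sum((y - p) * x)
        hess = -np.sum(p * (1.0 - p) * x * x)
        b -= grad / hess
    return b

for scale in (1.0, 2.0):
    x, y = simulate_group(scale)
    print(f"error scale {scale}: estimated beta ~ {logit_mle(x, y):.2f}")
# Prints roughly 1.0 and 0.5: same preferences, different apparent
# coefficients, purely because of heteroscedasticity on the latent scale.
```

Compare the two groups naively and you would conclude their preferences differ by a factor of two, when in truth only their error variances do.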

I won’t name names, in the interests of discretion, but I’m tired of making this point year in, year out, with no result (with the honourable exception of the EuroQol Foundation, which funded a group I am part of to look at this)… and I showed it empirically in the best-worst scaling (BWS) book. Please read the health chapters to understand this. I’m open to questions by email if you don’t understand the logic.

(4) I spent a lot of my own money showing how attitudes are related to preferences in politics… which got me zilch. The media are lemmings… they’d rather all jump off the cliff together than report something different (and based on stronger assumptions) and risk being “the one who was wrong”. Again, a lack of statistical training, as already noted by people like Ben Goldacre.

So I’m afraid I’m a little tired of all this. I have a business to run. Parents to do a lot of stuff for.

I’m still here on email – ask me if you’re puzzled. I’m not trying to be obstructive here. But I need to concentrate on putting food on the table.

All the best,

Terry

Best-Worst voting the answer?

With the truly appalling outcomes for Labour and Lib Dems – compared to where they need to be to be competitive in the General Election in a few weeks – maybe it is time to start thinking about electoral reform again.

Let’s start with that old trope from the LibDems – “fair votes”. Kenneth Arrow got a Nobel prize for proving there’s no such thing. Stop using the term. You decide which key welfare criteria you want from your system; then you can choose a voting system that delivers those (and probably not the “unimportant” criteria).

Now, we know there is a strong desire in the UK to preserve the link between “an MP” and “a constituency”. Fair enough. But the Alternative Vote – defeated in the referendum a few years ago – is not the only, nor indeed perhaps even the best, replacement for first-past-the-post (FPTP).

Tony Marley – co-author on the BWS book with me – has written a lot about the maths behind voting systems. People don’t realise Best-Worst Scaling works as a voting system. Plus I reckon it’d be attractive in the UK.

Here’s an example of how it might work, and deliver a different outcome from that observed in the results just published in the Local Election for the TEES VALLEY.

FIRST ROUND RESULTS:

  • CON – 40,278
  • LAB – 39,797
  • LD – 12,550
  • UKIP – 9,475

SECOND ROUND RESULTS (TOP TWO GET 2nd PREFS):

  • CON – 48,578
  • LAB – 46,400

So what happened? It’s pretty obvious most UKIP 2nd prefs went Conservative – their boost is suspiciously close to the UKIP vote. Of course we know UKIP has also poached from Labour in LEAVE-dominated northern seats, but I doubt many “kippers” put LAB as 2nd pref.

Where are the rest of the 2nd prefs?

About 7,000 are missing in action. Maybe those voters simply refused to give a 2nd preference, or gave it to fringe parties.

But I bet they knew what party they hated most.

Here’s how it might have played out under BWS:

  • LAB and LD voters encouraged to put Conservatives as “least”
  • UKIP put Labour (primarily) as “least” – some will put LD
  • CON put LAB as “least”

Result:

  • CON “lose” around 52,000 (LAB/LD) votes
  • LAB “lose” around 50,000 (CON/UKIP) votes

A LibDem win – or, if UKIP and some CON voters hate the LDs (for their pro-Europe stance) even more than they hate Labour, then enough “least” votes move off Labour and onto the LDs for Labour’s net total to beat the LibDems’. Either way the Conservatives don’t win – the UKIP/Conservative vote simply isn’t enough to offset both Labour and the LDs.
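For the arithmetic-minded, here’s a back-of-envelope sketch of that tally. The “least” allocations are my illustrative assumptions from the scenario above, not real second-preference data:

```python
# Back-of-envelope best-minus-worst tally for the Tees Valley first-round
# figures, under the ASSUMED "least preferred" allocations sketched above.
best = {"CON": 40_278, "LAB": 39_797, "LD": 12_550, "UKIP": 9_475}

# Assumed "worst" allocations: LAB + LD voters mark CON as least;
# CON + UKIP voters mark LAB as least. For simplicity LD and UKIP attract
# no "least" votes here; in reality some would, shrinking the LD margin.
worst = {
    "CON": best["LAB"] + best["LD"],    # ~52,000
    "LAB": best["CON"] + best["UKIP"],  # ~50,000
    "LD": 0,
    "UKIP": 0,
}

net = {party: best[party] - worst[party] for party in best}
for party, score in sorted(net.items(), key=lambda kv: -kv[1]):
    print(f"{party:>4}: {score:+,}")
# LD tops the net scores (+12,550) because it attracts few "least" votes,
# while CON (~-12,000) and LAB (~-10,000) largely cancel each other out.
```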

Of course with turnout around 21% a LOT more potential votes are up for grabs if people are energised to believe their vote(s) matter.

Worth thinking about.


BREXIT survey stuff on work account

Just a reminder that the results of my Best-Worst Scaling survey – which showed what would happen if we could know the (LEAVE/REMAIN) view of every eligible voter in the UK – are on my work account.

Most follow-up – regional variation, recommendations as to which types of BREXIT are preferred by whom, how 8% of the 28% who never turned out to vote could have held the key to everything – will be on that account too.

Some interesting observations from the raw data – and remember we can look at an individual’s responses here, because BWS gave us 10 data points to estimate 5 parameters:

  • The East Midlands, although heavily LEAVE, skews quite heavily toward a different type of BREXIT to other LEAVE regions.
  • The strong preference for free trade is simply not there… it has shifted – VERY heavily – toward the free movement of people throughout Europe. This “strong positive liking of immigration” is visible nowhere else: the non-English nations (Wales, Northern Ireland and Scotland) have a broadly neutral view on immigration, while the non-East-Midlands part of England strongly dislikes it.
  • East Midlanders also have a strong antipathy toward several key aspects of the EU – in fact the pattern of their dislikes looks remarkably consistent with a “Swiss form of BREXIT” – one of the so-called “soft” BREXIT options.
  • They also are the region which loathes the EU budget contribution the most.
  • Their results form a remarkably realistic view compared to some other segments of British society: they (we – I am a Nottinghamian) seem quite happy to sacrifice elements of the single market and the customs union, and will adopt a constructive view on immigration with our European neighbours if it means we “get some money back”. We’ll also compromise on free trade quite happily.

So what gives? Has everyone round here had some secret training in Ricardo’s work, thus recognising when free trade is not welfare-enhancing?
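As an aside, here is a minimal sketch of the kind of individual-level scoring those 10 data points permit (my own illustration – the respondent’s answers below are hypothetical and the principle labels are my shorthand, not the survey’s exact wording):

```python
# Minimal individual-level best-worst counting: five questions, each giving
# one "best" and one "worst" pick over five principles = 10 data points.
# The answers and labels below are hypothetical, for illustration only.
from collections import Counter

PRINCIPLES = ["single market", "customs union", "free movement",
              "EU budget contribution", "free trade deals"]

# One hypothetical respondent's (best, worst) answer per question:
answers = [
    ("free trade deals", "EU budget contribution"),
    ("single market", "free movement"),
    ("free trade deals", "free movement"),
    ("customs union", "EU budget contribution"),
    ("single market", "free movement"),
]

best_counts = Counter(best for best, _ in answers)
worst_counts = Counter(worst for _, worst in answers)

# Best-minus-worst score: a simple individual-level preference measure.
scores = {p: best_counts[p] - worst_counts[p] for p in PRINCIPLES}
for principle, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{principle:>24}: {score:+d}")
```

That is all it takes to see, respondent by respondent, which principles someone most values and most rejects – no averaging across people required.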

BREXIT-REMAIN redux

[Figure: graph of EU support]

Well, I’ve finally got round to programming a model that:

  • Asks you just five best-worst scaling questions – you choose your “most agreed-with principle” and “least agreed-with principle” – which people take 2–3 minutes to answer, tops.
  • Runs a best-worst scaling (BWS) exercise on just YOUR five answers.
  • Spits out three things:
    • A pie chart showing how likely each of the six main options (continued EU membership/Norway option/Switzerland option/Canadian option/Turkish option/World Trade Organisation option) is to best satisfy YOUR principles
    • A pie chart showing the predicted chances of you personally supporting each of the five principles
    • A pie chart showing the predicted chances of you personally rejecting each of the five principles


Thus the first chart tells you, based on which of the five principles we could “get” under each of the six models of a new British–European relationship (one REMAIN, five BREXIT), the chances of each model delivering “as much as we want”.
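To make that concrete, here is a minimal sketch of the sort of arithmetic that could sit behind that first chart. The delivery patterns and scores below are stylised assumptions of mine, purely for illustration – they are not the actual model:

```python
# Stylised sketch: score each of six models by how well it matches the
# respondent's principles, then convert scores to shares for a pie chart.
# Feature patterns and scores are illustrative assumptions, NOT the model.
import numpy as np

# Individual best-worst scores for five principles, from the 5-question
# exercise (order: SEM, customs union, free movement, budget, own FTAs):
principle_scores = np.array([+3, +1, -3, -2, +2])

models = ["Remain", "Norway", "Switzerland", "Canada", "Turkey", "WTO"]
# 1 = the model involves that feature (stylised, for illustration only):
involves = np.array([
    [1, 1, 1, 1, 0],   # Remain
    [1, 0, 1, 1, 1],   # Norway
    [1, 0, 1, 0, 1],   # Switzerland (stylised)
    [0, 0, 0, 0, 1],   # Canada
    [0, 1, 0, 0, 0],   # Turkey
    [0, 0, 0, 0, 1],   # WTO
])

utility = involves @ principle_scores             # fit to YOUR principles
shares = np.exp(utility) / np.exp(utility).sum()  # logit shares -> pie chart
for model, share in zip(models, shares):
    print(f"{model:>12}: {100 * share:5.1f}%")
```

The feature matrix and scores would of course differ in reality; only the shape of the calculation matters here.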

This, like all CORRECT best-worst scaling, is an individual model, giving you PERSONALISED results, not “you averaged with others”.

We can, of course, average across people and slice and dice the results by sex/gender, political affiliation etc., to find out which model is most popular in particular groups. But the point is, my model doesn’t NEED to do that – because just five BWS questions tell me everything I need to know about what you value.

Gold dust for all the campaigns – and the government, as it struggles to negotiate what type of new relationship would command majority support in the country.

I have deliberately answered the survey as a “hypothetical REMAINer” to show what they should have done – namely made the single European market something people understood and fought for, above other factors.

There are lots of scenarios – including what probably actually happened, namely that people were in reality “sure” they disliked free movement of people and/or EU budget contributions but unsure about their support for the single European market (SEM), free trade agreements (FTAs) and the customs union (CU) – which lead to a BREXIT outcome as the most likely to achieve their preferences. Your relative preferences for these determine which BREXIT model (hard/soft) is most likely to suit you.

Campaign managers/constituency parties/national party executives as well as Jo(e) Public would be very interested in this.


Best-worst capabilities endorsed

Wow. In this article Will Hutton interviews Amartya Sen. A crucial quote:

“…you have to take in, somehow, the unattractiveness of the last as well as the attractiveness of the first candidate.”


Wow, quantifying the worst as well as the best?

Which group has been at the forefront world-wide of doing this?

Yep, we’ve been way ahead of our time.

EU inequality

OK, I’m breaking my self-imposed rule within a few hours.

[Tweet screenshot: “Ben says EU good”]

I usually have the utmost respect for Ben Goldacre and don’t want to get into trolling territory on Twitter, but this is a simplistic statement. The first claim is true. The second is highly debatable if you stratify by age.

It is well known (see Bill Mitchell, amongst a wealth of others, many of whom could not be dismissed as “outsiders” but are well within the mainstream) that unemployment in southern EU countries is appalling amongst the young – around 50%. People with PhDs are living at home with their parents and, if they’re lucky, doing some barista work. All courtesy of the banking rules that force member states to “live within their means – like a household”. A nonsense paradigm, of course, if you understand how money is created and destroyed. But the results are in, and have been in for many years now. There is, of course, a strong affinity with the EU, given the benefits of the past. However, recent ECB policy means the young can’t afford a home and get bare-bones healthcare.

effects or dummies redux

That old bugbear comes back… is effects coding really superior to dummy coding?

Abstract

This note revisits the issue of the specification of categorical variables in choice models, in the context of ongoing discussions that one particular normalisation, namely effects coding, is superior to another, namely dummy coding. For an overview of the issue, the reader is referred to Hensher et al. (2015, see pp. 60–69) or Bech and Gyrd-Hansen (2005). We highlight the theoretical equivalence between the dummy and effects coding and show how parameter values from a model based on one normalisation can be transformed (after estimation) to those from a model with a different normalisation. We also highlight issues with the interpretation of effects coding, and put forward a more well-defined version of effects coding.
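To see the equivalence the abstract describes, here’s a minimal sketch (my illustration, not code from the note). Dummy and effects coding are linked by a simple post-estimation transform, and the level-to-level differences – the only quantities identified in a choice model – are unchanged. (The alternative-specific constant also shifts by the same mean, which this snippet ignores.)

```python
# Minimal sketch: transform dummy-coded estimates for an L-level attribute
# into effects-coded ones. The parameter values are made up for illustration.
import numpy as np

# Estimated dummy-coded parameters for a 3-level attribute; the omitted
# reference level is implicitly 0.
beta_dummy = np.array([0.8, 0.3])
full_dummy = np.append(beta_dummy, 0.0)    # [0.8, 0.3, 0.0], reference last

# Effects coding constrains the level values to sum to zero, so the
# transform is simply: subtract the mean of all L dummy values.
beta_effects = full_dummy - full_dummy.mean()
print(beta_effects)                        # approx [ 0.433 -0.067 -0.367]
print(round(beta_effects.sum(), 12))       # 0.0 -- the sum-to-zero constraint

# The differences between levels are untouched by the renormalisation:
assert np.allclose(np.diff(full_dummy), np.diff(beta_effects))
```

Same model, two normalisations – which is exactly why neither coding can be “superior” in any fundamental sense, only more or less convenient to interpret.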

That’s one of the joys and frustrations of DCEs; why you can never rest on your laurels and should acknowledge that this is a field in its own right; and why you should have a DCE expert on your team for all important projects. Just when you thought something was right, its merits are questioned. Fun, fun, fun.

model disclosure

This post concerns a Twitter poll and discussion initiated by Chris Carswell (editor of PharmacoEconomics and The Patient; Twitter handle @PECjournal) on whether a statement should be added to a paper when the authors’ model, though requested, was not submitted for peer review.

I abstained, saying that I think a statement should be made if it’s a “traditional” decision-analytic/similar CEA/CUA model, but I personally don’t favour it for DCEs.

The two counter-arguments made were that:

  1. Proprietary models go against the spirit of transparency that is increasingly demanded; and
  2. Model selection for DCEs being “part art” is similar to the situation in qualitative research, yet qualitative researchers still have to submit discussion guides/full surveys.

I do acknowledge both points, but my responses would be as follows:

(1) Proprietary software is routinely used to generate designs and (particularly) to analyse the results of economic and other models: we’re getting into the nitty-gritty of the likelihood-maximisation routine used (EM algorithm or otherwise), the starting-value routines used internally by the stats program, and so on. The ultimate black box is the software that does everything for the novice/inexperienced DCE researcher – mentioning no names 😉

Now, that doesn’t make things right, but it does mean that unless the researcher has the full code for everything from DCE design to model selection, or can reference it all for reviewers, I don’t think singling out the DCE model-selection issue is fair.

(2) I have no objection to submitting the design of the survey. When I was a reviewer, most fatal errors were made in the design, and I take the view that no DCE can be properly reviewed unless the reviewers have access to the design. (Another reason authors might like to think twice about “adaptive conjoint”: are they going to provide the design administered to every respondent? Ha – thought not. And if they did, would reviewers check such a model by programming it in their own software? Again, thought not.) I myself also provide details of the main and secondary analyses I conducted; these can all be reproduced by reviewers if they wish. The difficulty – and I believe, from my (far more limited, I acknowledge) observation of qualitative data analysis, that it is the same there – is that value judgments are made: e.g. “have we really reached saturation?” For the reviewer it comes down to “in my experience, do I agree with this?”

And, unfortunately, in my experience in academia too few peers had sufficient experience – and I mean designing, analysing and interpreting DCEs across multiple fields – to feel comfortable endorsing me when I say “I didn’t use the model dictated by the BIC – or whatever statistical rule you like – because it routinely gives too many latent classes, and I used my experience to choose the best model”. Yes, I sound arrogant, but when any one DCE has literally an infinite number of solutions – a point still ignored or misunderstood by most practitioners – experience and gut feeling based on intimate knowledge of your sample, data and survey inevitably become paramount.

In short, model-selection skills can’t be taught; they must be gained through experience.

And you are fully entitled to say “well, you would say that, you work in industry now”. To which I’d respond: yes, I do have an interest in saying that, but why are academic groups that routinely delay competitor groups’ papers, mis-reference work in order to skew publication metrics and funding likelihood, etc., not pulled up on their shenanigans? I got a Google citation alert just today, and on seeing the authors I would have bet 100 GBP with anyone on the planet that the citation would not be to the paper of mine that was absolutely crucial to their new publication. I would have won the bet: the citation was to something else of mine entirely. I just laugh at these things now – they don’t affect me or my business – but it’s rather sad that they still go on. Particularly in this case, when it can contribute to more QALY valuation studies that can’t possibly give the right answer: how is that defensible on equity or efficiency grounds?

So, until basic rules of research – and we’re talking the stuff I was taught in my first PhD supervision, like “get the primary source”, not even the more recent transparency stuff – are followed consistently by academics, I’m afraid industry is entitled to retort that “people in glass houses shouldn’t throw stones”.

no capability not death

Just a quick note following a Twitter exchange I had regarding whether capabilities as valued by the ICEPOP team (the ICECAP-O was referenced in the original paper) are “QALY-like”.

Key team members never intended the ICECAP-O scores to be multiplied by life expectancy (in the way, say, an EQ-5D score is). Whilst we have recognised that people would like to do this, technically it is a fudge, and it comes down to definitions and the maths:

Death necessarily implies no capabilities, but no capabilities (the bottom ICECAP-O state) does not imply death. More fundamentally, the estimated ICECAP scores are interval-scaled, NOT ratio-scaled (for reference, read the BWS book): we used a linear transformation that preserves the relative differences between states, but the anchoring at zero would not be accepted by a mathematical psychologist – they would say that defining the bottom state to be zero doesn’t make it so.

Since different individuals technically had different zeros – the BWS estimates (like any discrete choice estimates) have an arbitrary zero – multiplying an interval-scaled averaged score (our published tariff) by a ratio-scaled one (life expectancy) to compare across groups/interventions is wrong. If there is heterogeneity in where “death” sits on our latent capability scale (which we can’t and didn’t quantify – unlike the traditional QALY models estimated in the proper way), then comparisons across groups that don’t share the same zero give incorrect answers. We can compare “mean losses of capability from full capability”, which is why I personally (though I don’t speak for the wider team here) prefer the measure to be used as an alternative measure of deprivation, like the IMD in the UK or SEIFA in Australia.
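A toy numeric example of why the arbitrary zero matters (my own illustration, not from the ICEPOP work):

```python
# Toy example: two groups share the same interval-scaled latent capability
# value for a state, but "death" sits at different points on their latent
# scales. Anchoring each group's own death at zero and then multiplying by
# life expectancy manufactures a spurious cross-group difference.
state_value = 2.0                        # same latent value in both groups
death_group1, death_group2 = 0.0, 1.0    # where death "really" sits per group

# Anchoring each tariff so that its own death = 0 (a linear shift):
tariff_group1 = state_value - death_group1   # 2.0
tariff_group2 = state_value - death_group2   # 1.0 -- same state, lower score

life_expectancy = 10.0
print(tariff_group1 * life_expectancy)   # 20.0 "capability-years"
print(tariff_group2 * life_expectancy)   # 10.0 "capability-years"
# The apparent 2x difference is purely an artefact of where each group's
# zero lies, not a real difference in capability.
```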