Category Archives: Complete Blog

Please note: None of the ideas expressed in this blog post are shared, supported, or endorsed in any manner by my employer.

He-man isn’t manly enough? LOL

So let me get this straight (excuse the pun). A 1980s cartoon that I watched as a kid and even then could see was absolutely chock-full of homoeroticism and gay references (which these days are un-pc but in the 1980s were “defining qualities”) has been remade by Netflix.

A bunch of guys on YouTube have denounced it for making the male characters “weak” and otherwise unmanning them, and for promoting female characters who are clearly lesbian?

Sheesh. He-Man and Thundercats clearly had huge gay subtexts and a million memes illustrate this. They were joke cartoons made in an era when writers were subversive in getting whatever they could past the censors. The fact that you never “got” this and take the bloody cartoons this seriously shows either:

  1. You are really really dumb, or
  2. You are so closeted you are in Narnia.

Sheesh. You go on about “SJW bleating” but you don’t see the irony of your bleating? Some people need to get out more.

Guardian health retirement disingenuous

So Health Editor Sarah Boseley at the Guardian is retiring from the position. At least one commenter points out that despite the good reporting she did, her little mea culpa over MMR does not excuse the frankly execrable standards of science reporting at the time, which seemed to assume that “both sides deserve equal footage”. That is NOT how science works.

Thanks to search engines, one can easily find examples like this, which Sarah can’t sweep under the carpet. I remember at the time how horrific the coverage was by all media, and especially the Guardian. Some commenters try to defend her by saying that if even the Lancet got taken in, then employing someone with an appropriate scientific qualification rather than a humanities background would not have solved the problem. Errrrr, debunking such nonsense was being done routinely on the web back then by independent sites and scientists who didn’t, unfortunately, have the Islington links, and I remember the fury I felt. Ben Goldacre, although not yet syndicated to the Guardian with his Bad Science column, had been running it for 2+ years before Sarah’s frankly dreadful comment piece. I still wonder why the syndication of his column ended in 2011.

I lived in Sydney 2009-2015 and remember seeing vaccination rates by postcode. I happened to live in the poshest postcode in Australia (renting!) and it, along with adjoining areas of “Guardian-types”, had the lowest rates of MMR vaccination. I was horrified. I’m sorry Guardian: yes, you did draw attention to some horrific global health issues. But you must also take ownership of standards of publication in medicine that have promoted and exacerbated a sense of “we know better” among New Labour/Third Way types who consistently undervalue STEM education and buy your trashy publication, which is no better than the Daily Mail when it comes to health. Until you do, I will never give you credit for such self-serving nonsense, which promoted a worldwide rejection of science that future historians may judge rather harshly in terms of lives lost from COVID and vaccine-rejection compared to whatever good you did with AIDS etc.

Do I value you overall? No. So you went to disease zones. Some of us were desperately engaged in attempts to help educate the public so they were aware of real and non-real risks. Some of us have spent decades trying to produce better indicators of well-being that would show just how awful parts of the world are. So you did it via some nice pics, but at the expense of causing millions to have their confidence in science degraded. You are not some “female outsider” who achieved world-changing stuff by being female and not a chain-smoking less-than-alcoholic. Indeed history may well judge your PUBLISHED work as being some of the worst in human history in terms of degrading human knowledge and killing kids via lack of vaccination.

Here is what I want. I want you to hold your hand up and say “Yes, I failed to properly research vaccination. I failed to look to experts like Ben Goldacre who weren’t “in the system” but who were demonstrably cleverer. I failed to understand the basic epistemology of science and that underpinning the statistics of medical/health trials in particular.” This latter is one of the most egregious errors in medicine ever committed. For you to fail to admit your mistake here is truly truly awful. Your article is consistent with someone who shows such a basic lack of self-awareness that I am truly shocked.

I’m British and Mercian – Starmer take note if you’re going to invoke Britishness

 

BREXIT has accelerated debate over whether the UK itself should break up. Scotland may soon get a second referendum. Welsh Nationalism has increased. The New York Times predicted that Northern Ireland will re-unify with Eire within a decade. When I was an actuary the statistics suggested “sometime in the 2040s” given higher Catholic birth rates. However, although detailed census data is kept secret for a century, summary statistics are released soon after the census itself. They’re likely to show that Catholics outnumber Protestants  in Northern Ireland – in 2011 they were already close (45% to 48%).

 

 Is “Unionism” at the UK level something we on the left should fight for?

My (southern) Irish surname might suggest I want rid of Northern Ireland. I actually have family links to both sides of the debate but I hold no strong view except that of self-determination. Yet self-determination, with younger NI protestants being less enamoured with Unionism and more bothered about the basics – getting sausages, milk, a passport that gives them opportunities across the EU – may well lead to Irish re-unification soon, as the NYT suggests.

 

What I’m proposing here can accommodate NI but for simplicity I’ll assume “just Great Britain – England, Scotland and Wales”. Starmer’s “Buy British” is noble but insufficient in the face of English, Welsh and Scottish nationalism. The left-wing cause is best served if we promote a two-pronged approach which emphasises Britishness but builds on growing regional loyalties – regions which might be based on the 12 (11 if NI is excluded) “counting regions” used in referenda.

 

 Strengthening a “left-wing/progressive” Britain

The Conservatives cannot command 50+% of the vote in a large number of Westminster Parliamentary constituencies. Yet the opposition is too fragmented and loses huge numbers of seats courtesy of First-Past-The-Post (FPTP) voting. What process might cause “all progressives” to unify behind one candidate per constituency to get a Westminster majority whose sole purpose is to replace FPTP with something if not “fair”, then “fairer”?

 

 The Citizens’ Jury

The “Citizens’ Jury/Citizens’ Parliament” has attracted interest and its ideas are simple:

 

·        30ish random people from the population should vote on “key topics” like “public funding”, “health”, “climate change” etc.

·        They should get to debate the competing issues raised by those facts after global experts have presented them

·        They should come to some sort of conclusion or compromise.

·        A resulting policy agenda is voted on in a referendum or via an electoral pact putting only one party candidate against the Conservatives in every seat – a “pseudo-referendum”.

 CJs can have huge advantages.

·            Random group

·            They get to listen to experts without the interference of “media outlets” who may have an agenda in misrepresenting things.

·            They should come to a conclusion that the wider public can be confident in, knowing that “people like me have been represented in the CJ”.

 How they might be problematic if not designed well.

·            Choose 30 random Brits. There’s a (surprisingly high) chance you’ll get few women, no gays, nobody BAME, nobody very young or very old, etc.

·            You can get a bad randomisation, just due to chance.

 

 How SHOULD a CJ be run?

 ·         LIMITED randomisation – you use quotas and randomise within quotas. Thus if female adults are 55% of adults then 55% of the 30(ish) participants should be randomly selected females. If gays are 5% of the adult population then 5% of the 30(ish) should be randomly selected gays…..etc

·         This way, the final CJ should approximately represent the wider population on all key sociodemographic variables (gender, age, sexuality, ethnicity etc). However, within all “key demographic groups” people were still selected randomly.

 THEN you can present to them. If a key subgroup has a problem, it is clear. Debate will ensue. It cannot be ignored by virtue of “there being nobody from a BAME background, or only one elderly person”.
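For the programmatically minded, here is a minimal sketch (in Python, purely illustrative and not any official sampling protocol) of the “limited randomisation” idea above: fix quota shares for each key group, then randomise within each group so the 30-ish jurors mirror the wider population. The quota numbers and the single “group” variable are assumptions for illustration; a real CJ would cross several demographics at once.

```python
import random

def draw_citizens_jury(pool, quotas, size=30, seed=42):
    """Randomise WITHIN quota cells so the jury mirrors the population.

    pool:   list of dicts, each with a "group" key (illustrative).
    quotas: dict mapping group -> population share (should sum to ~1).
    """
    random.seed(seed)
    jury = []
    for group, share in quotas.items():
        members = [p for p in pool if p["group"] == group]
        seats = round(size * share)            # seats this group is "owed"
        jury.extend(random.sample(members, min(seats, len(members))))
    return jury

# Illustrative quotas only (assumed, not official statistics)
quotas = {"female": 0.55, "male": 0.45}
pool = [{"id": i, "group": "female" if i % 2 else "male"} for i in range(10_000)]
print(len(draw_citizens_jury(pool, quotas)))   # ~30, quota-balanced by construction
```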

Has a CJ ever shown a change in people’s minds after experts presented?

Yes, actually. It concerns smoking. A CJ was run asking the following question: “Since smokers have SELF-INFLICTED injuries (in terms of lung cancer etc) should they be ‘sent to the back of the queue’ when it comes to treatment?”

 Pre-CJ a lot of people said “Yes”. After the CJ, which involved experts showing that smoking was often a “logical short-term choice” made in response to systemic problems of poverty and stress, the CJ changed its mind. Smokers were to be treated no differently from anyone else. It is a National Health Service after all.

 So how might a CJ lead to change in the UK?

There are various topics that a “proper” cross-section of Brits might decide need reform.

  • Should first-past-the-post (FPTP) be used as the voting system for our primary chamber (the House of Commons) given that often the “winner” is rejected by 60% of the constituents?
  • Should we have a second chamber that reflects “regional identities”? People are increasingly feeling loyalty to a region. The Northern Independence Party is all over Twitter. For the highlands of Scotland Holyrood is “just as distant as Westminster”. In Wales Welsh is much more a “way of life” in the north than in the south.

So maybe Britain’s second chamber needs to be more like the USA’s Senate – reflecting distinct areas that don’t have equal populations but need protection to ensure a varied country that preserves regional identities.

 Final thoughts: Aren’t we just promoting ANOTHER layer of government?

No. The remit of the CJ would be:

·          Replace FPTP with a fairer system for the House of Commons;

·          Replace the House of Lords with a Senate. It would have 12+ regions, each electing its members by proportional representation. Laws could only pass if no region in England, Scotland or Wales vetoed them (so no more “English dominance”). The Senate would replace regional assemblies.

·          A Senate of 150 members would have 100 elected, plus 50 automatic members who are experts in fields crucial to the existence of Britain. Thus members of SAGE, the chiefs of various technical societies and other experts would be automatic members. Totally democratic? No. But do you want the best plumber or the one who gets the most stars on some stupid website?

 This “power to the regions” – delegated via a written constitution that forbade Westminster from “taking the powers back except via votes akin to American Amendments to the Constitution” – would be intended to replace, not augment, regional assemblies.

 Clearly, this would be presented by the media as a “power grab” intended to weaken Wales and Scotland. Yet if the Senate had veto power over key issues (national finance, environment etc) then never again could England impose its will upon Wales and Scotland (and NI if it sticks around) if even one region in any of the three (four) said no. That is real local power. Plus, unlike with the current devolved institutions, if those powers form part of a constitution drafted by a CJ and voted through, Westminster can’t simply take them back. I’ll bet people start voting more often.

 I’m British and Mercian. Maybe that’s the kind of thinking everyone should adopt.

 

DCE references

I’ve promised academic references for certain statements in the last few blog entries. Here they are, numbered according to the numbers used in the articles elsewhere:

[1] Yatchew, A. and Griliches, Z. (1985). Specification Error in Probit Models. The Review of Economics and Statistics, 67(1): 134-139. – THIS PAPER SHOWS WHY YOU MUST ADJUST FOR DIFFERENT VARIANCES BEFORE AGGREGATING HUMANS, ELSE YOU GET BIAS, NOT SIMPLY INCONSISTENCY. RELEVANT TO LOGIT OR PROBIT MODELS.

[2] Hensher, D., Louviere, J. and Swait, J. (1998). Combining sources of preference data. Journal of Econometrics, 89(1-2): 197-221. – THIS PAPER SHOWS THEORETICALLY AND EMPIRICALLY WHY YOU MUST NET OUT VARIANCE DIFFERENCES BETWEEN DATA SOURCES (INCLUDING SUBJECTS) BEFORE AGGREGATING THEM.

[3] Louviere, J. and Eagle, T. Confound it! That pesky little scale constant messes up our convenient assumptions. – USER-ACCESSIBLE EXPLANATION OF THE VARIANCE ISSUE IF [1] AND [2] ARE UNAVAILABLE.

[4] Louviere, J., Flynn, T.N. and Marley, A.A.J. (2015). Best-Worst Scaling: Theory, Methods and Applications. Cambridge University Press.

[5] Swait, J. and Louviere, J. (1993). The role of the scale parameter in estimation and comparison of multinomial logit models. Journal of Marketing Research, 30(3): 305-314.

 

Perils of discrete choices in vax and surveys – SPOILERS FOR AVENGERS ENDGAME

NB: Edits done between 16th and 18th June 2021 for clarity and to explain certain statistical concepts. Key references (as of the 18th) are now included; they link to the post here.

Warning – for some potential LULZ I’m going to use the Marvel Cinematic Universe (MCU) as a way to explain variability in outcomes. If you haven’t watched up to and including Avengers Endgame and don’t want spoilers go watch them first!

I find myself having to explain why “this” or “that” study (frequently survey based, but sometimes a clinical study) has results that should be regarded as deeply suspect. I hope to provide some user-friendly insights here into why people like me can be very suspicious of such studies and what you as the average reader should look out for.

To aid exposition, I’ll do the old trick of “giving the punchline first, so those who don’t want the mathematics and logic can skip it”.

In many contexts you get a “one shot” for an individual – “live/die” or “cure/non-cure” or “Democrat/Republican/Libertarian/Green” in a general election etc. You don’t know the variability. Would that person – call her Mrs Smith – have displayed the same result in 14,000,604 other parallel universes which in key respects are the same as ours? MCU fans will instantly recognise this number. Spoiler alert – We’re getting into the territory of sci-fi and the Marvel Cinematic Universe here! When I say “variability” I am SPECIFICALLY referring to the variability ONE person – Mrs Smith – exhibits. Would she ALWAYS do “action p”? Or would she, in different universes, do “action q”?

Unfortunately we observe actions in ONE universe. We don’t in reality OBSERVE the “other” 14,000,604 universes – we are not Doctor Strange. So we don’t know. So scientists assume “what we observe” is, in some sense, the TRUE effect. For the statistically minded who want to say “GOTCHA – we have standard errors and confidence intervals etc to handle uncertainty”, I’ll reply: go down and read the asterisked section, because we clearly are NOT on the same page here.* (TL;DR – that “variability” says NOTHING about what Mrs Smith’s variation is in different universes but is based on variation ACROSS SUBJECTS who might be LIKE Mrs Smith. The two are potentially VERY VERY different.)

This can lead to spectacularly bad outcomes but scientists defend themselves by saying “we had no choice! You can’t have a do-over like in the MCU and see if the patient dissolves to dust under a different stimulus!” Calculating the “variation intrinsic to patient Smith” is impossible. I’m here to discuss how, if we’re really interested and willing to put resources into it, we CAN find out the variation in many circumstances (though not a Thanos or other “death” event) – DID A “RESPONSIVE” PERSON RESPOND BECAUSE THEY ALWAYS WILL OR DID WE SEE THE ONE OUT OF 14,000,605 OCCASIONS IN WHICH THEY DID? DID THE SET OF STIMULI REQUIRED TO REVERSE THE SNAP (KILLING HALF OF ALL LIVING THINGS) – ONE OUT OF 14,000,605 – REQUIRE IRON MAN TO DIE? (UNFORTUNATELY, YES). AVENGERS ENDGAME IS NOT JUST POPCORN SCI-FI BUT A SUBTLE AND CLEVER EXPLANATION OF DISCRETE CHOICE MODELLING.

To start this discussion we’ve got to go back to stats 101. Heteroscedasticity. You’ll have been shown it in a graph probably. As “x” values increase, so do “y”. However, the RANGE of values for y increases – you see a “funnel” with the tip pointed towards the lower left and it getting wider as you move toward the upper right. If you do a least squares regression (to explain y values from a given x value) you’ll get the right “average” answer. However, your measure of “certainty” or “variability” – the standard error and thus confidence interval – will be wrong. This is what I’ll call a “nuisance”. Your “main answer” (vector of betas, showing the average effect of an explanatory variable on the outcome) will be correct on average, but your level of confidence will not and needs to be adjusted. There are methods to make this adjustment, so all can be well. Continuous outcomes (like GDP, blood pressure, etc) are easily analysed using such methods. “Discrete” (one out of two or more) outcomes (yes/no……die/live……Democrat/Republican/Libertarian/Green) are not. [1,2,3,5]

But here’s the important takeaway I’m going to explain – heteroscedasticity in models with continuous outcomes (GDP/blood pressure etc) is just a “nuisance”. Heteroscedasticity in limited dependent variable models (yes/no……Democrat/Republican…..etc) is a FUNDAMENTAL PROBLEM CAUSING BIAS, not just “standard errors that must be tweaked” – you usually DON’T KNOW THE DIRECTION OF THE BIAS, LET ALONE ITS MAGNITUDE. YOU ARE ROYALLY SCREWED.
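Here is a small, hedged simulation sketch of that takeaway (Python with numpy/statsmodels; the numbers and the 50/50 group split are made up purely for illustration). With a continuous outcome, funnel-shaped noise leaves the least squares slope roughly on target; with a binary outcome, giving half the sample a larger error variance (a smaller logit scale) drags the pooled logit coefficient away from the true value itself.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20_000
x = rng.uniform(-2, 2, n)

# 1) Continuous outcome with funnel-shaped noise: slope estimate still ~2,
#    only the standard errors need correcting (the "nuisance" case).
y = 2 * x + rng.normal(0, 1 + np.abs(x), n)
print(sm.OLS(y, sm.add_constant(x)).fit().params[1])

# 2) Binary outcome, true beta = 2 for everyone, but group B answers with
#    twice the noise (logit scale halved). Pooling without netting out the
#    scale difference biases the pooled coefficient itself (the "bias" case).
beta = 2.0
scale = np.where(rng.random(n) < 0.5, 1.0, 0.5)     # group A vs noisier group B
p = 1 / (1 + np.exp(-scale * beta * x))
choice = (rng.random(n) < p).astype(float)
print(sm.Logit(choice, sm.add_constant(x)).fit(disp=0).params[1])   # well below 2
```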

 

Discrete Outcomes and Their Problems

Discrete outcomes are generally coded (0,1) or (0,1,2) or (0,1,2,3) etc. These numbers usually have no intrinsic numerical meaning. They could be (a,b) or (a,b,c) or (a,b,c,d). However, the point is, to adequately understand “what is going on underneath” you usually need something that links a discrete outcome to some (albeit hypothesised) latent (unobserved) continuous numerical scale. Let’s consider 1950s America when party allegiance generally was on a single left/right dimension with “Democrat” arbitrarily chosen as the “positive” end of the scale and “Republican” as the negative. Reverse these if you want – it makes NO difference to my argument.

Mrs Smith has an underlying position on the latent “party allegiance scale”. If it is positive, she is more likely to vote Democrat and vice versa. Note she is NOT guaranteed to vote one way or the other. I’m just saying, if she is “strongly positive” then the chances of her switching to the Republican Party are small, but not zero. As her position on the “party allegiance scale” zooms up to positive infinity, the chances of her voting Democrat asymptotically approach one (but NEVER get to one).

There needs to be a “link function” to relate a (hypothesised) position on the latent party allegiance scale with a discrete choice (let’s keep it simple with just Democrat or Republican for now). Different academic disciplines prefer different functions – some prefer those based on the normal (Gaussian) distribution whilst others favour those based on the logistic distribution. In practice it makes practically no difference to the answer. Those of us who like the logistic function do so because it is a “closed form” function when there are 3+ outcomes – in other words there is a mathematical formula that can be “maximised” using an established technique. The multinomial probit has no such closed form – you have to “brute force attack” it. Doing brute force means you must make sure your “peak” is not the tallest mountain in the Appalachians but is, in fact, Mount Everest. The logistic “maximisation routine” can make the same mistake but it’s generally easier to spot mistakes and get to Everest first time.
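For concreteness, a sketch of the two link functions in symbols (V is Mrs Smith’s position on the latent “party allegiance” scale, and Φ is the standard normal CDF); the second line is the closed-form multinomial logit for 3+ alternatives:

```latex
\[
P(\text{Democrat}) = \frac{e^{V}}{1 + e^{V}} \quad \text{(logit)}
\qquad \text{or} \qquad
P(\text{Democrat}) = \Phi(V) \quad \text{(probit)}
\]
\[
P(\text{choose } j) = \frac{e^{V_j}}{\sum_{k} e^{V_k}}
\qquad \text{(multinomial logit, closed form)}
\]
```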

The link function relates a position on the latent party scale to a discrete (Democrat/Republican) outcome. In estimation we actually use it in reverse – OBSERVING Mrs Smith’s voting and (via the link function) INFERRING her position on the latent party allegiance scale.

How do you infer a position on a continuous scale from a (0,1) response (Democrat or Republican) in traditional logit/probit studies?

You typically look at the “other factors” defining Mrs Smith, her circumstances etc and most importantly, draw strength from OTHER people in your sample. See the problem? You are already making inferences on the basis of people who are NOT Mrs Smith. This leads directly to the issue of “variability” and so I’ll put the dreaded asterisk in for you stats bods.* We are inferring information from people “like Mrs Smith”. Just because they’re female, similar age, sociodemographics etc, does NOT mean they are useful in placing Mrs Smith accurately on the latent scale. You really need to see what she’d do in various scenarios, NOT what people “seemingly like her” do.

So, this process of making inferences is incredibly dangerous unless you have seen Mrs Smith’s choice/outcome behaviour under a bunch of different scenarios. If you have merely observed her in the ONLY ONE out of 14,000,605 universes where she voted Democrat then you’ll vastly over-estimate her loyalty to the Democrats. If she in fact votes Democrat in all 14+ million universes then you’re on more solid ground. The point is, YOU DON’T KNOW (unless you’re Doctor Strange).

As it happens, Dr Strange looked at 14,000,605 universes. He saw that there was one, and ONLY one “intervention” that could lead to victory (Thanos being defeated and Earth being restored). Unfortunately the “intervention” required Tony Stark to die. He held up one quivering finger at the crucial moment in the movie to show that the “Endgame” required Tony to act. Tony was clever enough to know what this meant. Which is partly why the movie is so family-blogging great. The Russo brothers, directing, knew what they were doing. Tony “did action x” and the one outcome that we “needed” out of a possible 14,000,605 outcomes came to pass. And he died. But Earth and the universe were restored.

Discrete Choice Modelling (Discrete Choice Experiments – DCEs) As a Solution

How do we escape the above conundrum? In effect, we try to simulate some “key universes” from the 14+ million that allow us to better understand “how committed Mrs Smith is to the Democrats”. We do this via a Discrete Choice Experiment or DCE.

How, broadly, do DCEs work? In essence you vary all the “key stimuli” that influence a decision in some pre-defined, statistical way so as to “simulate” “key universes”. Then you see what the person does in each one. You can then quantify the effect each stimulus has on that person’s utility function. Note carefully: I say “that person”. The optimal DCE can be done ON A SINGLE PERSON. You map their “demand surface” (the multi-dimensional version of the classic “demand curve”) across multiple dimensions so you can “plug in” any combination of levels of stimuli and predict if they’d choose a or b, Republican or Democrat…..or whatever your outcome is.

For the stats nerds, you must have non-negative degrees of freedom. In other words, for 8 estimates (“effect of this level of this stimulus is a degree of freedom”) you must have 8+ choices made by Mrs Smith. I’ve done this before. Incidentally, because I know the rules well I know when I can break them – I use least squares regression on discrete outcomes (0/1). You are taught NEVER to do this. Indeed, unless you know what you are doing that is good advice. But under certain circumstances it tells you a lot. Not in terms of “anything you’ll quote in published results” but as a “quick and dirty” way of identifying individuals who have extreme preferences and “getting into the tails of the logistic distribution”. I write macros that “find me anyone whose choices are entirely dictated according to whether the hypothesised election involves elimination of income tax”….or whatever.
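As a rough sketch of that “quick and dirty” screen (Python; the attribute-difference coding, the R² cut-off and the function name are all my own illustrative assumptions, not a published rule): run ordinary least squares on one respondent’s 0/1 choices, one attribute at a time, and flag anyone whose choices a single attribute explains almost perfectly – or who never varies at all.

```python
import numpy as np

def flag_dominant_attribute(X, y, r2_cutoff=0.95):
    """X: (n_choices, n_attributes) matrix of attribute differences between
    the two options in each choice; y: the respondent's 0/1 choices.
    Returns "deterministic", the index of a dominating attribute, or None."""
    y = np.asarray(y, dtype=float)
    if y.std() == 0:
        return "deterministic"   # never varies: segment this person off entirely
    for j in range(X.shape[1]):
        xj = np.column_stack([np.ones(len(y)), X[:, j]])
        beta, *_ = np.linalg.lstsq(xj, y, rcond=None)
        resid = y - xj @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        if r2 >= r2_cutoff:
            return j             # this attribute alone almost fully explains the choices
    return None
```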

This “cheat” is a good way to test an absolutely key problem with DCEs – that two people, one old, one young, might have IDENTICAL views, but the younger person seems to “respond more readily” to stimuli. I’ll group a bunch of 30 year olds and see that their “beta estimates” are all quite large – they are strongly affected (IT SEEMS) by the stimuli. I group a bunch of 70 year olds and see that their “beta estimates” are much smaller. They SEEM to be less affected by the stimuli. But if the “pattern of betas” is the same I know both age groups in fact have the same preferences. What is merely happening is that the “error rate” is higher among the 70 year olds – something commonly seen. The average level of age-related cognitive impairment is higher and they “mistakenly” choose the “less preferred” manifesto more often, “diluting” their estimates. However, when the chips are down in a real world situation (like the voting booth) they might be no different from their younger brethren. They concentrate hard and take voting seriously. Suddenly that 60/40 split seen in the SP data among the “oldies” becomes 80/20 (just like the young’uns) in the “real” – Revealed Preference (RP) – data.

Important point: in a Stated Preference (SP) study you deliberately “assume away” complicating factors. Frequencies then become very skewed. In a Revealed Preference (RP) study – REALITY – there is all sorts of cr$p that impinges upon your choices, so frequencies tend to move towards 1/n where n is the number of response categories. So people often look “less sure” in real life, with less skewed choice frequencies, but this just reflects the fact life is complicated and people don’t “assume everything else is equal”. Remember this.
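A tiny sketch of the check described above, with made-up beta vectors: if the two groups’ estimates differ only by a common multiplicative factor, the preference pattern is identical and only the error variance (scale) differs, so the scale must be netted out before pooling.

```python
import numpy as np

betas_30s = np.array([1.6, -0.8, 0.4])   # larger estimates (illustrative numbers)
betas_70s = np.array([0.8, -0.4, 0.2])   # same pattern, but "diluted" by noise

ratios = betas_30s / betas_70s
print(ratios)                             # all ~2.0
print(np.allclose(ratios, ratios[0]))     # True -> same preferences, different scale
```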

How do you Design a DCE in Practice?

Firstly, do qualitative work to find out what stimuli (attributes) matter to respondents (those which cause their choices to change, depending on what “level” the attribute takes).

Then, to design the “scenarios” you typically design a DCE in one of two ways. The first way is broadly Bayesian. You use prior knowledge as to what levels of the attributes “typically influence subjects like Mrs Smith”. You then construct a design – a series of pairs of manifestoes in the 1950s political example, one Democrat, one Republican – that vary these attributes in ways that “quickly and efficiently” establish what issues or groups of issues cause Mrs Smith to change her vote. Think of a bunch of dots in the (x,y) plane of a graph grouped around the best-fit line. You don’t bother to ask about manifestoes that are “far away” from Mrs Smith’s likely solution. This method gives you incredibly accurate estimates. BUT if your priors about Mrs Smith are wrong, you’ll get an answer that is “incredibly accurate but wrong”. It’s all very well to know the height of the highest peak in the Appalachians to the millimetre, but that’s no bloody use since Everest is where you should be measuring!

The second way utilises “orthogonal designs”. These essentially vary the stimuli (attribute levels) at right angles to one another so as to “cover all the utility space”. The downside is that some “pseudo-elections” involve manifestoes with policies that are pretty unrealistic. The upside is that “you’ve covered all possibilities”. You go looking for peaks in Bangladesh. Top tip – that’s dumb. But ultimately, because you are ALSO looking at the Nepalese plateau, you’ll find Everest. Is this better or worse? It depends. If you use Bayesian methods sensibly you can do at least as well as orthogonal designs. But they are a delicate power tool not to be used by the non-expert.
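A minimal sketch of what “moving at right angles” means in practice (Python; a textbook 2^(4-1) half fraction, not a design I am claiming was used in any particular study): four two-level attributes in eight pseudo-manifestos, with the fourth column defined as the product of the first three, so every pair of columns is balanced and uncorrelated.

```python
import itertools
import numpy as np

base = np.array(list(itertools.product([-1, 1], repeat=3)))      # 2^3 full factorial
design = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])

print(design)                                                     # 8 runs x 4 attributes
print(np.round(np.corrcoef(design, rowvar=False), 2))             # off-diagonals all 0
```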

Consider the original iPhone. Whilst phones that utilised some form of interactive screen had been around, the iPhone was genuinely revolutionary. Using “priors” based on the BlackBerry or Windows phones using a wand would have been seriously dangerous. The iPhone “thought outside the box”. That’s when an orthogonal design is best. Sometimes Mrs Smith doesn’t KNOW what she’d like until a hypothetical product is mocked up and shown on the screen. Bayesian methods would “stick to variations on what is available”. Orthogonal designs “imagine stuff” – OK, at the expense of asking dumb questions about Bangladesh mountains…..but whilst I value both approaches, I prefer orthogonal designs. Because I like to investigate “new stuff”. Sometimes the “peak” is something new you never knew was there.

Reliability of DCE (Stated Preference) Answers

Do people answer DCE scenarios honestly? Some do, some don’t. But the important thing is that experienced researchers like me know how to spot liars, those who have “done their best but have a big dose of uncertainty” and those who “are pretty sure of their answers” (even if their honestly given answers happen to be wrong).

How do liars give themselves away? Well, a DCE is typically operating in 5+ dimensions. The number of attributes (stimuli) defines the number of dimensions. If you lie – either just to be annoying or to get through the study quickly and collect reward points in some online panel – then you must lie in a way that “works across all the dimensions”. Humans generally can’t do this. I can often spot liars a mile off just eyeballing data. If you’re going to fool me you must remember your answers in 5+ dimensions and use some sort of crib sheet to ensure your lies are consistent and your “preferred options” all conform to your “story”.

If you suddenly prefer a cellphone with no memory and elsewhere you have made clear you like a lot of storage to look at porn family pictures then it will raise a red flag in my data checks. Because you’ve HAD to sacrifice something else. And when you make totally mutually contradictory choices I get suspicious. (Quite apart from the fact I’m male and we are no angels and I’ve yet to see a male who likes an electronic device that can’t “store a lot of stuff”).

So if you want to mess with me you’ve got to do it in 5+ dimensions. And I practically guarantee you can’t. Which means I give your ID to the panel owner. They typically won’t take action straightaway. However, when University of X samples you for THEIR study, they see the “flag” stating “this person might take the p!ss”. If they find you’ve clicked through quickly to get points and money quickly by paying no attention to the questions or are p!ssing about you know what happens? The panel company deletes your points ($). Bad luck – you just got booted out for being a d!ckhead.

TOP TIP – IF YOU’RE GOING TO LIE, MAKE SURE YOU CAN DO IT IN 9 DIMENSIONS BECAUSE OTHERWISE WE’LL SPOT YOU. YOU’LL BE BLACKLISTED PRONTO.

So it’s really much more hassle to lie in 9 dimensions than to just answer honestly. So don’t do it. If I’m unsure about you I won’t suggest you be blacklisted but I WILL instruct my analysis program to place you in the group with “higher variance” (i.e. you are “less consistent” and might have a tendency to give mutually contradictory answers). You’ll have a lower contribution to the final solution – I am unsure if you’re just someone who might have an odd demand function I’ve never before encountered, someone cognitively challenged by the task or you are taking the p*ss.

What about “other” stated preference methods? Willingness-to-pay is the one that has a much longer pedigree, but also is much more contentious. People like Yves Smith of NakedCapitalism have expressed unease with WTP estimates. I happen to agree with her – despite the fact I’ve co-authored with one of the world’s top WTP experts, Richard Carson of UCSD, whose figures were used in the Exxon Valdez settlement. I just find it hard to believe most humans can “think up a valid number to value something”. Sorry Richard. However, I HAVE used WTP on occasion when I’ve had no other choice. It’s just one of those tools I think must be used very very carefully.

Where Next?

DCEs will, going back to our 1950s politics example, present Mrs Smith with a series of hypothetical, but REALISTIC, election scenarios. If you use orthogonal designs then unfortunately SOME scenarios might seem odd…..but with knowledge and experience you can minimise the number of silly scenarios. So, present hypothetical manifestos (one per party). The key policies of each manifesto are the “stimuli” which are varied. Ask her to state her preferred party each time. If, out of 16 pairs, she says “Democrat” every time then this is both good and bad. Good in that we can “segment her off” as a diehard Democrat. Bad in that she violates a KEY assumption of regression models – that she has an “error” term. She is “deterministic” not “probabilistic”. Thus keeping her in ANY regression model leads to bias (at least, any regression model based on limited dependent variable outcomes – our logit and probit models).

In terms of why you should be “netting her out if she says Democrat every time”, please read the asterisked bit. A DCE, in an ideal situation, should be doable for a SINGLE PERSON. Try running a logit/probit regression in which the dependent variable doesn’t vary. The program (Stata or whatever) will crash or give an error – no variation in the dependent variable. Mrs Smith should not be in the regression! The “link function” is not designed to cope with 100% choices.

Why “Non-Experts” Make Horrid Mistakes

You might think adding Mr Patel, who does switch party depending on policy, solves things. I have lost count of the number of studies I have refereed whose authors thought this was “a solution”.

The thinking is, “OK, 100% + 50%: average = 75%”, and that can go into the link function to give an average utility, right? WRONG. The link function can’t deal with 100%, but 75% is fine. However, 100% from Mrs Smith is actually “infinite utility”. Do you know what you’ve actually done? You’ve asked “what’s the average of infinity and (say) ten?” Do you realise what a dumb question that is? If you think taking averages involving infinity is OK you should transfer out of all mathematical subjects pronto. Go do drama or liberal arts.
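In symbols (a sketch of the point, using the inverse logit of an observed choice proportion p as the implied latent utility):

```latex
\[
\operatorname{logit}(p) = \ln\frac{p}{1-p}, \qquad
\operatorname{logit}(0.75) \approx 1.10, \qquad
\operatorname{logit}(p) \to +\infty \ \text{as}\ p \to 1
\]
```

So “averaging” Mrs Smith with Mr Patel on the latent scale is literally averaging plus infinity with a finite number.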

You KNOW Mrs Smith votes Democrat no matter what. Use some family blogging common sense – this is what frustrates me so much about people using these models “cause they look cool”. Separate her out.

Results?

Results from a limited dependent variable model (logit/probit) are notoriously hard to interpret. Those who have seen my objections on NakedCapitalism will know what is coming. Here is a hopefully abbreviated version of the explanation.

Here’s the math. We can solve the likelihood function to find what mean and variance are “most likely” to produce the pattern of data we observed. Great. However in limited dependent variable models it hits a wall. Why? The mean and variance (technically a function of the variance) are multiplied by each other. So, let’s say you get the “solution” of “8”. If it is mean × variance, is this 8×1? Or 4×2? Or 2×4? Or 1×8? Or any of an infinite number of combinations?

The underlying problem is that we have ONE equation with TWO unknowns. You need a SECOND equation in order to split the mean and variance.
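A sketch of the identification problem in symbols: the logit scale λ is an inverse function of the error standard deviation σ, and only the product of scale and preference weights is estimable from a single data source.

```latex
\[
P(\text{choose } j) = \frac{e^{\lambda V_j}}{\sum_{k} e^{\lambda V_k}},
\qquad
\lambda = \frac{\pi}{\sigma\sqrt{3}}
\]
```

So an estimated λβ of 8 is equally consistent with (λ, β) = (1, 8), (2, 4), (4, 2) and so on; a second data source with a different relative scale is what lets you split them.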

A Second Equation?

A second set of data can come from many sources. McFadden (who won the “Economics Nobel”) used real data from people’s decisions regarding public transport to do this – it “calibrated” his model (in other words, gave the right starting numbers upon which all the rest rested!). Using real data – “Revealed Preferences” – as opposed to the “Stated Preferences” discussed here is very useful.[2,5] RP data is good, but is useless when you want to design something fundamentally new or “think outside the box”.

Where Next?

Attitudes have proven to be a salvation.[4] Segment people according to attitudes and the “uncalibrated voting data” you have can suddenly become highly predictive. It did for me at the 2017 General Election! 🙂

However, whenever I see a story with “rates” quoted for groups……be it “views on vaccination” or anything else, I roll my eyes. Why? Because whilst “patterns” (“definitely will vaccinate / probably will vaccinate / not sure / probably won’t / definitely won’t”) are somewhat stable, the initial calibration is often wrong. Thus, the media just yesterday in the UK breathlessly pronounced that the proportion of “vaccine hesitant” people who did, in fact, get vaccinated was really high. 20 years of working in the field told me something……..the skewed proportions expressed in stated preference studies are almost NEVER reproduced in real life. I knew full well that the proportion of people refusing the vaccine would NOT BE SKY HIGH, AS THE MEDIA SUGGESTED – in fact when “revealed preferences” became available, we’d see that whilst the “pattern” across the (say, 5) response categories might be maintained, the raw rates would stop being so skewed and would be “squashed” towards 20% each. In effect, this “sky high” number of anti-vaxxers would be squashed towards one fifth (all the response category proportions would get closer to 1/n where n is the number of response categories). It might even be the case that the pattern was changed and the “no, never” bunch would go below 20%. THAT is why you must “calibrate” stated preference data.
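A minimal sketch of that “squashing towards 1/n” (Python; the utilities and scale values are purely illustrative): the same pattern of underlying utilities, pushed through a logit with a lower scale (more real-world noise, as in revealed-preference data), keeps its ordering but its shares collapse towards 20% each across five response categories.

```python
import numpy as np

def shares(utilities, scale):
    expv = np.exp(scale * np.asarray(utilities, dtype=float))
    return expv / expv.sum()

v = [2.0, 1.0, 0.0, -1.0, -2.0]    # "definitely will" ... "definitely won't" (illustrative)
print(np.round(shares(v, scale=1.5), 2))   # stated preference: very skewed
print(np.round(shares(v, scale=0.3), 2))   # revealed preference: same ordering, shares near 0.2
```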

Dr Strange knew there was only one combination of stimuli that would lead to Thanos being destroyed in such a way that the “snap” was reversed. Unfortunately that combination required Tony Stark to die too. Sometimes the “hugely unlikely” is what is observed. It all depends on what is being changed around us.

TL;DR: Next time you see a study saying the “proportion of people who will engage in behaviour x is some outlandishly high or low y%”, you should immediately be wary.

*There is an absolutely CRUCIAL point to be made here regarding DCEs and which will head off (I hope) the criticism that I know a bunch of readers with statistical knowledge are just itching to say. The “standard errors” and hence confidence intervals around “utility estimates” or “party loyalty estimates” or whatever you are modelling in a logit/probit model, are calculated by looking at between-subject variation. DCEs are, at their core, NOT about between-subject variation. Their default assumption is that Mrs Smith’s “degree of certainty” is NOT the same as Mr Patel’s. In a traditional regression with a continuous outcome, all these different “types of variation” tend to come out in the wash and you’ll get an answer that might be imprecise…..but it’s unbiased.

In a limited dependent variable model (logit/probit), this heteroscedasticity (differences in “sureness”/”certainty”/”level of understanding”/”engagement with the task” between people) causes BIAS (not just a nuisance but an unaddressable problem). This is why in traditional logit models (probit too, but logit is more common) like a clinical study with ONE endpoint per patient (cure/not……live/die…..) you can’t measure within-subject variation because YOU ONLY HAVE ONE DATAPOINT! The “quantification of variability” is ENTIRELY BETWEEN-SUBJECT variation. Mathematical psychologists have shown for half a century that humans are inconsistent in many many areas. A well-designed DCE can measure this. I’ve done it, finding exactly what my predecessors found – lower levels of education, advanced age, etc, all cause you to be MORE INCONSISTENT (higher variance, manifesting as small beta estimates and choice frequencies closer to 1/n). Until you “net this out” you can’t aggregate Mrs Smith and Mr Patel’s data. THIS IS WHY PSEPHOLOGISTS GET ELECTION PREDICTIONS WRONG SO OFTEN.

 

STAY TUNED FOR PART 2: WHY INTERPRETATION OF DISCRETE DATA IS FREQUENTLY SO VERY VERY WRONG

Most-Least Voting (2)

Questions were raised about Most-Least Voting – some of which were serious, some I suspect were “rabble-rousing”. I’ve edited to reduce snark and generally tried to give the benefit of the doubt, even though I know some people really should just go out more……

Arrow’s Theorem only applies to generic voting. Fair results can be obtained if particulars are taken into account. When you only have a few candidates MLV is not what you’d go for. With a huge pool of eligible candidates, say 1000, all available for say 9 seats, then Cumulative vote tallying is ideal.

Reference please.

“Also something polsci experts often fail to consider is degree of polarization. You don’t have to have just “like” vs “dislike”, you can have a Likert scale on degree of like/dislike, and use it to weight the votes, so that a polarizing candidate who is less polarizing than the other still has a chance to be ahead of the milquetoast centrist. I know, I know, requires fairly sophisticated voters, but worth a shot some time in experimental research trials.”

Likert scaling assumes the distances between each choice (answer option) are equal. Please provide references from the mathematical psychology literature showing this to be true. (I’ll save you time – there are none. My co-author was editor of the top journal – JMP, the Journal of Mathematical Psychology – for almost 40 years and never encountered a study showing this. He is AAJ Marley.) I could quote you amusing anecdotes, like the fact that older traditional Chinese people associate the character for the number 4 with death, so avoid it. Statisticians then spend yonks trying to work out if dips at number 4 are “real” or “due to cultural stuff”. Please stop throwing up new terms like “Likert” when it is merely expressing a phenomenon I discredited in my postings before.

San Francisco city government, supervisors, sheriff and district attorney are chosen by ranked choice voting. That, combined with district elections for supervisors, has resulted in a parade of ineffectual, sometimes dangerous, political mediocrities, a chaotic disaster, controlled by the Democratic County Central Committee. If a voter fails to choose three candidates, their vote is thrown out.

You say ranked choice voting – I’m not defending that – so your point is?

Some supervisors have been elected with less than 25% of the vote.

Choose from Hillary, Trump and any run-of-the-mill US politician in the centre. Why does LESS THAN 25% “MEAN THEY ARE ILLEGITIMATE”? “Top” candidates don’t matter under MLV if they also disgust a huge number of the rest of the population. This is NOT ranked voting (which YOU talk about). Please actually address the voting system I discussed and don’t straw man.

It’s horses for courses to get around Arrow. In other words, you select the most appropriate voting system for the size of the candidate pool and the seats being vied for.

I said your latter statement at the start. Why are you presenting this as a “new insight”? Arrow always said you make your moral judgments, based on “values” and the “system”, THEN you can choose the system that best achieves these. As to “get around Arrow”. Nope.

While it is an interesting fad, there is no real guarantee that rigging elections to favor centrists will get you better government. As it happens, I am a Libertarian. Some of my ill-advised fellow party members argue vociferously for ranked choice voting or the like. I attempt to point out to them that RCV tends to guarantee that my party will never win elections, but the RCV faithful will not listen.

Where did I say that MLV rigs elections in favour of centrists? I merely quoted an observation from the Dutch/Belgian researchers that centrists probably stand a better chance of being elected. If you have data showing that MLV disproportionately benefits centrists at the expense of others please quote it – PARTICULARLY in a multidimensional format (which even the continental European authors do not provide). Note I also said that in a MULTI-DIMENSIONAL world, the concept of a “centrist” is less meaningful. MLV could get you your libertarianism (in getting govt out of the bedroom). Please stop putting words into my mouth.

There’s a lot of talk about candidates and parties, but not a lot of talk about policy.

One way to create significant momentum to deal with global climate change is to place high taxes onto fossil fuels. As Illinois recently demonstrated, this is highly unpopular.

In either Ranked Choice or Most-Least systems, how do necessary but unpopular policies get enacted?

I’m not going to claim miracles. Just as under ANY other voting scheme, there must be a critical mass of people who “see the peril” and vote accordingly. MLV at least allows these people to “veto” candidates who totally dismiss the environmental issues. So it isn’t “the solution” but it may be “a quicker solution”. One big benefit of MLV is that it is probably the system that gives the greatest “veto power” to any majority of the population whose candidate(s) didn’t make it into government. So in the UK, the strong environmental lobby crossing all the “progressive parties” who keep losing elections could start exercising real power via their “least” votes.

Links for MLV

Nakedcapitalism.com very kindly reposted my last entry.

It was somewhat lacking in links.

I intend, when I’m feeling up to it, to put all the links in a posting. Stay tuned, they WILL appear.

Thanks.

Ranked choice and most-least voting

I recently realised that two systems proposed as “PR-lite” or “a step towards full PR” can produce radically different outcomes IN REALITY and not just as a THEORETICAL CURIOSITY. The two are “single candidate ranked choice” and “most-least voting – MLV”, most notably when there are just three candidates.

Here’s the deal. Under ranked choice you must rank all three candidates, 1, 2 & 3 (most preferred to least preferred). Under most-least voting you indicate only the “most preferred” (rank 1) and least preferred (with 3 candidates, rank 3). The OBSERVED set of data should be the same. (I’m not going to get into the issue of why they might not – that gets into complex mathematics and I’ll do it another time).

For those who don’t want to get bogged down in the following discussion of the maths, here’s why the two systems can, given EXACTLY the same observed count data, give a different “winning candidate”. Ranked voting essentially tries to identify the (first or second best) candidate that the people-supporting-the-losing-3rd-party-candidate are “most happy with”. Under MLV, if both “first” and “second” preference candidates are diametrically opposite (and mutually hated) then NEITHER should necessarily be elected. The candidate who came a (very very) distant third can be elected if (s)he is NOT HATED by anyone. Essentially, if you polarise the electorate you are penalised. A “centrist” who hasn’t either “enthused” or “repelled” anyone will win under MLV.

I tended to think this was a “theoretical curiosity”. However, upon looking more closely at the 2016 Iowa Democratic Presidential primary I realised this ACTUALLY HAPPENED. Bernie Sanders and Hillary Clinton were essentially tied on about 49.5% each in terms of their “primary first preference vote”. Hillary had the edge, and the 3rd candidate, O’Malley, dropped out (but too late, so he still got votes). Yet he would actually have been the key influencer if either ranked voting or MLV had been used. Under ranked choice, either Hillary or Bernie would have won (determined by whom the majority of O’Malley’s supporters put as second preference). Under MLV, and assuming that the much-talked-about antipathy between Bernie and Hillary was real, each candidate’s supporters would have put the other as “least preferred”. The “most-minus-least” counts would have been slightly negative for one and likely both candidates. O’Malley, on the other hand, would have obtained a small positive net most-minus-least vote (getting 1 to 2% of the vote, with few/no people putting him as “least preferred”). MLV simply subtracts the “least preferred” total from the “most preferred” total for each candidate, giving a “net support rating”.

Under ranked choice voting either Hillary or Bernie would have won. Under MLV both would have been denied the win in favour of O’Malley, because he “pissed nobody off”.
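A toy computation of that illustration (Python; the vote shares are assumed for illustration and are NOT the actual Iowa figures): if Clinton and Sanders supporters each name the other as “least” while almost nobody names O’Malley as “least”, the net most-minus-least totals flip in his favour.

```python
# Assumed, illustrative shares only – not the real 2016 Iowa results.
most  = {"Clinton": 0.495, "Sanders": 0.495, "OMalley": 0.010}
least = {"Clinton": 0.500, "Sanders": 0.500, "OMalley": 0.000}

net = {c: most[c] - least[c] for c in most}   # most-minus-least "net support"
print(net)                                    # Clinton -0.005, Sanders -0.005, OMalley +0.010
print(max(net, key=net.get))                  # OMalley wins under MLV
```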

Here’s the more detailed discussion.

Most-Least Voting (MLV) is a special case of a more general method of “stated preferences” called Best-Worst Scaling (BWS). Declaration of interest: I am a co-author on the definitive CUP textbook on BWS, was involved (along with its inventor) in much of the theoretical development and application in various fields (most notably health). HOWEVER I have had no involvement with the theory, parameterisation or application of MLV. Indeed, once I became aware of this method of voting, on checking the bibliography, it became clear that the authors were not actually aware of BWS and due to the “silo effect” in academia, had come up with it largely independently of what we had already done. Incidentally some of the Baltic States have used or do use MLV in certain instances so it isn’t just a “theoretical curiosity”.

OK, having got that out the way, what do I think of MLV? In short, I think it is worthy of serious consideration and wish we’d thought of it first! Like ranked choice voting with single member constituencies (something in use or proposed in various Anglo-Saxon countries like Australia, the UK and USA), it is not “proper” Proportional Representation (PR). However, it can be considered either as a nice compromise, or as a stepping stone to “full PR”. In terms of its similarities to ranked choice voting: suppose there are 5 candidates in your constituency. Under ranked choice, for the maths to not be horribly skewed and potentially very very gameable, you should be forced to rank all five, 1,2,3,4,5. The problem, known since the mid 1960s, is that people are good at “top” and “bottom” ranks but get very “random” and arbitrary “in the middle”. MLV exploits this. It only asks for top and bottom. Thus it may be considered to be the “minimum change to first-past-the-post – FPTP – possible” so as to “make things easy for people”. You only provide ONE extra piece of information – the candidate/Party you like least. If you do not provide both a MOST and a LEAST choice then your ballot is spoilt. This is IMPERATIVE for the maths to work, and for the system to be demonstrably “equitable”. (Most-minus-least vote totals must sum to zero.)

The common question is “Suppose there are only three candidates – aren’t ranked choice and MLV the same?” NO. See above for a real life example. Ranked choice MIGHT even be unconstitutional in certain countries (if the mathematicians and lawyers got together), because not everyone has the “same influence” mathematically.

So what is happening in practice?  The authors conclude that if the “FPTP winning” candidate espouses (say) a very extreme policy on (say) immigration or something, that all other parties abhor, then (s)he is likely to lose. All other parties “gang up” and place that candidate as “least”. Most-minus-least vote tally is net (highly?) negative. A more “moderate” candidate likely wins. Indeed, the authors claim that “centrists” likely prevail a lot of the time – though they might be an “O’Malley with 1% primary vote”. Though if a candidate would get a MAJORITY (and not just a PLURALITY) under FPTP, they’ll still win under MLV. So “majority” (non-coalition) governments still can happen – they’re just harder to achieve and “third parties” (etc) much more easily get a foothold. I happen to think that this “centrists rule” conclusion is a little simplistic when you move from a single dimension (left/right) to multidimensional space. Yes, maybe you get a candidate closest to the centroid across all dimensions but “how strongly people regard each dimension” can affect results. So, as they frustratingly say in academic papers, it’s “an empirical issue” as to what will happen. However, I will venture a conclusion that “extremists” will naturally get weeded out. Whilst some extremists might be generally considered bad (consider dictators who were first voted in via pluralities in 1930s Europe), others (painted as “extremists” by the MSM like a Sanders today or an Attlee or FDR of yesteryear) could be considered necessary and without them society would be much worse off. It gets necessarily subjective here…!

TL;DR: Arrow’s Impossibility Theorem still holds. MLV doesn’t solve all problems but it is attractive in addressing a lot of the most commonly made criticisms of voting systems used in the UK and USA. However, it isn’t the ONLY system that can address these criticisms – it is merely the “simplest” in terms of practicality and requiring “minimum extra effort by voters beyond what they do now”. Whether you “like it” depends on your “values”.

 

The 2021 Notts Labour Collapse – Both Simple and Complex.

I live in a suburb of Nottingham, UK. It’s almost smack bang in the centre of England (though this means it’s somewhat in the bottom third of the UK as a whole by latitude). It’s one of those confusing cities that can be large or small depending on definition.

•    The “Unitary Authority” (“City” political entity in charge of everything) is defined by the traditional “City of Nottingham” with a population of barely one third of a million.

•    Like so many cities, “Urban Nottingham” (including sprawling suburbs covered administratively by the County of Nottinghamshire, NOT the City) but which in practice is “Nottingham” by way of integrated health-care, buses, and various other services, is much larger – around three-quarters of a million strong.

•    If you go up to “Metro Nottingham” (quoted on Wikipedia but which might – it’s not clear – include the “neighbouring city of Derby” which might be regarded as “commuter belt” but which would strongly contest membership of Nottm!) then we’re talking 1.5 million.

Why give this long intro? Because on “Super Thursday” recently, all the bits outside “City of Nottingham” elected councillors for Nottinghamshire. Notts had traditionally been a part of the Labour “Red Wall”. It began to crumble 4 years ago and collapsed entirely this year. The media “analysis” was largely simplistic about the collapse of Labour. I’m not arguing they are wrong. They are right. But for the wrong reasons.

I’ve had a chance to delve deeper into the Notts data. I think I see what went on, and the Tories must be (reluctantly) admired. Why? For their electoral guile in Nottinghamshire, and likely elsewhere, in first encouraging “local” parties to peel off economically left, socially conservative Labour voters (so incumbent Labour councillors lost), but then strangling those parties when their Brexit-fuelled desire for more local power became a threat to the Conservative Party’s centralising nature.

The “story” of Nottinghamshire is both simple and complex. The Tories had 31 seats and had previously led a coalition. They needed 34 to govern alone (66 seats in Nottinghamshire). Their previous coalition partner was a regional party (Mansfield Independents with 4 seats). Ashfield is the district that borders Mansfield (and there are 7 districts in Notts – not necessarily equal sized in population so think of US States but with some “double candidate divisions” to try to somewhat offset this). Ashfield has 10 seats, 5 of which had been held by the Ashfield Independents plus a 6th by a sister party which then merged this election, effectively making the Ashfield Independents hold 6 of 10 divisions. For those puzzled by terminology, divisions is an old term used for county subdivisions. It’s akin to wards but not necessarily the same as a ward. Of the other 4 Ashfield divisions, 3 were Tory, 1 Labour.

The Tories clearly knew ALL FOUR WERE GOING TO BE LOST (which they duly were – the Independents now hold all 10 seats). Why? The Tories “moved” one of their councillors (who also happens to be the newly elected, as of 2019, Member of Parliament for Mansfield) to contest a seat in neighbouring Mansfield for this election. He had been one of the three Tory Ashfield Councillors – perhaps the key one.

On the one hand, fair enough for the guy to move into a district more obviously contained within his Westminster Parliamentary constituency. On the other hand, his council seat was a stone’s throw away and “moving” ahead of a landslide that nobody in the media predicted certainly raises questions….such as “How did you know that was going to happen?”

So the Tories knew they were in danger of moving backwards – they in fact lost 5 seats (mostly to independents) so were now 3+5=8 seats short of a bare majority of 34. Getting 8 seats from Labour (which was exactly what they got) would do it, but nobody likes the bare minimum. So who did they go after to get some additional seats? Their own coalition partners, the Mansfield Independents. All 4 of their divisions fell (though one went to another, differently affiliated independent), 3 to the Tories (including the aforementioned MP). Voilà: 37 seats, 3 clear of the 34 needed – the majority you see quoted on the news.

It is all presented as a “total Labour failure”, which, ultimately, it is. HOWEVER, the right wing, over the past few years before and since the Brexit Referendum, have:

•    encouraged people in deprived “Old Labour” districts like Mansfield and Ashfield who felt utterly let down by New Labour to follow UKIP etc,
•    then after “Brexit was delivered”, regional “parties for local people”.
•    The Tories even then went into coalition with such a “local party”.
•    Then when it suited them, they ate them. The now zero-seat Mansfield Independents Party should have remembered what happened to the Liberal Democrats when they went into coalition with the Tories.

The “short version” lesson:
•    The Conservatives in a key “former red wall county” actually LOST quite heavily to a new party of “older Lefties” who were not just “anti-Brussels” but “anti-Westminster”. Such people were left-wing economically and small-c conservative socially (and very “LEAVE” supporting).
•    However, the Tories clawed back losses there by
(1) exterminating their coalition partner – another similar “local party based in Old Labour area” – turnout DOUBLED in those seats compared to rest of Notts – 60% in a local election in Mansfield? That’s incredible, as in “unbelievable”, and
(2) grabbing Labour seats that were also “Old Labourish” but had “Starmer/New Labour” candidates.

The lesson? Whilst the 4 Labour Arnold candidates beat the trend by being visible in DOING things for their constituents, elsewhere Labour got hammered. Not always from a “direct blow” from the Tories, but rather from the inevitable conclusion of a long process that started with the Blair decision to leave the left behind.