Category Archives: DCE support

Myers-Briggs

Oh dear. Here we go again. What personality are you? How the Myers-Briggs test took over the world.

It gets boring shooting down M-B. It’s like shooting fish in a barrel. After all, when M-B compares unfavourably even to a questionnaire (see link to Quartz article about the Big-Five) that states:

Rather than giving an absolute score in each of the Big Five categories, they tell you your percentile in comparison to others within your gender

you know you’re in deep doo-doo. You’re making interpersonal comparisons. Contrary to the Guardian’s quoted criticisms of M-B as unrealistic binary choices, that is NOT its problem. Discrete choices are EXACTLY what you should be getting people to do. It is how you INTERPRET and ANALYSE them that matters. Some tips on judging these types of instrument:

  • Ensure they are based on a sound theoretical model. Schwartz’s List of Values is good because the types appear as segments of a kind of pie chart. Diametrically opposite types are on opposite sides of the circle whilst more similar ones are closer together.
  • If you can’t run a regression to give complete results for ONE person – without drawing on ANY information from ANY other person – then it’s bad.
  • The corollary to the above point is that you must statistically have positive degrees of freedom: more independent datapoints than parameters being estimated. Which means repeated choices. Which leads to:
  • You must get insights into an individual’s consistency (variance). Only in certain controversial areas of life do humans typically exhibit perfect consistency. Generally, kids, older people, people with lower levels of education and/or literacy display higher variances.

The kind of questions these questionnaires should be addressing are ones like “Of the multi-dimensional universes of “types”, which type or mixture of types best describes me, when I’ve been asked to do as many comparisons as possible?”

Even then, even if you use a proper statistical design (e.g. an orthogonal design), two people might look very different in terms of their observed frequencies of agreeing with each statement. Person A has frequencies (estimated probabilities) that are all fairly squashed toward one over the size of the choice set: so if you’ve presented pairs, they’ll all be close to 0.5. Person B might have frequencies that are all close to one or zero. If the PATTERN is the same, though, person A and person B are likely the same type of person. It’s just that for some reason person B was more consistent (lower variance) in answering.
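
To make that concrete, here is a minimal sketch (in Python, with made-up utility differences and scale values) of two respondents who share an identical preference pattern but differ in consistency, viewed through a binary logit:

```python
# Minimal sketch (invented numbers): two respondents share the same "true" preference
# weights but differ in consistency (scale). Pairwise choice probabilities come from a
# binary logit: P(choose left) = 1 / (1 + exp(-scale * v)), v = utility difference.
import numpy as np

rng = np.random.default_rng(1)
utility_diffs = rng.normal(size=8)           # utility differences in 8 hypothetical pairs

def choice_probs(scale, v=utility_diffs):
    return 1.0 / (1.0 + np.exp(-scale * v))  # scale acts as an inverse-variance factor

person_a = choice_probs(0.3)   # low consistency: probabilities squashed toward 0.5
person_b = choice_probs(3.0)   # high consistency: probabilities near 0 or 1

print(np.round(person_a, 2))
print(np.round(person_b, 2))
# Same PATTERN: whichever option A favours, B favours too, just more emphatically.
print(np.array_equal(person_a > 0.5, person_b > 0.5))   # True
```

The pattern (which option each person leans towards) matches exactly; only the "squashedness" differs, and that is variance, not type.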

I never worked on personality questionnaires but I did discuss issues with Geoff Soutar and Julie Lee when they came to work with Louviere many times during my 6 years in Sydney. So I know this stream of work quite well. Schwartz himself decided to “throw away” his old scoring system for the LoV – which necessarily spent many pages trying to net out person-specific heuristics – in favour of Best-Worst Scaling. BWS avoids getting people to use numbers. It uses the most natural way to make a choice: picking one option from a few.

As a final note, this brings me back to a comment I’ve seen on NC by someone who was genuinely trying to be helpful in understanding the logit and probit models. Unfortunately the link was to a Stata working paper I’ve deliberately steered clear of because it all goes wrong in the final two pages.

Those “tricks” to understand means and variances? Dig out your logit/probit data for ONE individual. Can you run them? Unless you’ve been doing a well-designed discrete choice experiment you’re about to ask me “are you out of your mind? Everyone knows you get just a one or a zero for a person”. That, dear reader, is why the writer has not properly thought through this guide.

Predicted probabilities, BIC, etc. are, in fact, all still potentially wrong because the likelihood function in logit/probit models fixes the variance. So even following all the rules you can misinterpret the mean-variance split. You need external information. Which is why the “sterilising/non-sterilising vaccine” information regarding SARS-CoV-2 is so crucial. I can now definitively rule out the “means model” – which is exactly what the conventional logit/probit models assume. So their results are wrong by design.


Revelation re Covid

Occasionally you are putting your thoughts into words and realise you finally “get” something. That happened today when explaining why I was suspicious of two papers “explaining” SARS-CoV-2 (the virus behind Covid-19) that were linked to by NakedCapitalism.com. NB NC were not “endorsing” these studies, merely putting them out there for discussion and critique. I duly did so and had a revelation.

I now know whether SARS-CoV-2 primarily has mean or variance effects. It is mostly about variances. Which is the nightmare scenario. How did I come to this revelation? Well, as usual, it was by absorbing the wise words and experience of those who are “at the front line”.

Here is the deal.

  • We know none of the vaccines for SARS-COV-2 are sterilising.
  • Thus you can “catch” it more than once.
  • We know from breakthrough cases and rapid emergence of variants (that respond at differential rates to existing vaccines) that people don’t follow a binary model [0,1] – be protected through chance/vaccine or get Covid. They can get it 2+ times.
  • Thus we have a logit/probit model with variances – on a “latent scale of susceptibility to infection”, people do not have a “mountain” that simply gets shifted following a bout or a vaccine. The vaccine flattens the mountain into a gentle hill: less likely to get horrifically ill, but high variance – they can get it multiple times (see the sketch just after this list).
  • The papers referred to, as do all the papers I’ve read so far, assume the vaccine effects are ENTIRELY BASED IN MEANS.
  • This is conceptually incompatible with what we know from the vaccines and what their manufacturers state (albeit in small print sometimes) – the vaccines are non-sterilising. They reduce symptom severity but don’t stop you getting SARS-Cov-2 again.
  • THUS A MODEL ASSUMING THE ODDS/RISK RATIOS ARE HEAVILY INFLUENCED BY VARIANCES NOT MEANS IS THE ONLY VAGUELY VALID ONE. MEAN-BASED ONES ARE AUTOMATICALLY WRONG. THEIR ESTIMATES ARE BIASED.
  • YET ALL THE PAPERS ARE ASSUMING MEANS, i.e. STERILISING VACCINES. WTF?
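
Here is the sketch promised above: a minimal, invented-numbers illustration using a latent threshold model (which is what a probit is underneath) of why an observed drop in the severe-illness rate cannot, by itself, distinguish a mean shift from a variance (“flattened mountain”) effect.

```python
# Minimal sketch (my own illustration, not from the linked papers): severe illness
# occurs when latent susceptibility exceeds a fixed threshold. The SAME observed drop
# in the severe rate can be produced by a mean shift OR a variance effect.
from scipy.stats import norm

threshold = -0.524          # chosen so the unvaccinated severe rate is ~70%
p_unvax = 1 - norm.cdf(threshold, loc=0.0, scale=1.0)

# Story 1: vaccination shifts the MEAN of the latent scale down.
p_vax_mean_story = 1 - norm.cdf(threshold, loc=-0.271, scale=1.0)

# Story 2: vaccination leaves the mean alone and INFLATES THE VARIANCE ("flattens the mountain").
p_vax_variance_story = 1 - norm.cdf(threshold, loc=0.0, scale=2.07)

print(round(p_unvax, 2), round(p_vax_mean_story, 2), round(p_vax_variance_story, 2))
# ~0.70, ~0.60, ~0.60 -- the observed rates alone cannot tell the two stories apart.
# (A pure variance effect can only push a rate towards 50%, which is why these
#  illustrative numbers start above one half.)
```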


So what will be the final outcome? Basically ANY piece that doesn’t attempt (even in a rudimentary way) to separate, or at least comment on, the mean-variance confound and note that the evidence favours variances is not going to be read by me. It goes into the same class as “papers that try to explain flights via flat earth paradigms”. Garbage. Nice to finally have a good rule that enables me to implement a policy I’ve rarely had enough “concrete data” to support. However, the data and interpretation from the good people at places like NC have “solved” the mean-variance confound for me.

Any paper that quotes risk/odds ratios without discussing variances is trash. I’m not reading or commenting on it. Maybe I’ll print it out for use in the next toilet paper shortage? Full stop.

Covid postscript

Just a few comments to clarify complex ideas regarding “variances” in limited dependent variable models.

When I say “mountains are flattened”, I mean disease severity is reduced among that subgroup who were previously “hit hard”, but the burden of disease is spread more evenly across everyone. So in ten parallel universes, instead of the same 10% of people ALWAYS getting very ill, 90% of people will get somewhat ill. The particular 90% varies in each universe. You personally are no longer “assured” of getting (or not getting) covid. The variance goes up across the population. Though down for a lot of individuals who previously would have had the same outcome 10 out of 10 times.

Identifying people with zero variance is important. These people are deterministic, not probabilistic. They can’t be in a biostats (logit/probit) model. You just describe them qualitatively according to what determines their disease status. Don’t attempt to include them thinking that “they just boost sample sizes and improve precision”. It’s like saying “I’m going to take an average of a bunch of numbers that includes infinity”. Dumb Dumb Dumb. These people, annoying though they are for logit/probit models, are actually useful in policy if you can find WHY they do/don’t get ill.

The KEY factor here is variance heterogeneity. If one group of people, in 100 parallel universes, experiences 40 cases – the SAME 40 people in all 100 universes – then they CANNOT be aggregated with another group of people who, in 100 parallel universes, experience on average 40 cases, but where WHICH 40 people get ill varies immensely across the 100 universes. Any universe from the first hundred has “consistency”. Any universe from the second hundred has extreme inconsistency. Aggregating them can’t be done. Unlike in A LINEAR MODEL, it doesn’t just mess with standard errors. It causes BIAS.
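
A minimal sketch of that bias, assuming statsmodels is available (the sample sizes and scale values are invented): two groups share the same true effect but differ in consistency, and the pooled logit coefficient is wrong for both.

```python
# Minimal sketch (my construction): two groups share the SAME true effect of x on a
# binary outcome, but group B is far less consistent (larger latent error variance,
# i.e. a smaller logit scale). Pooling them without netting out the scale difference
# gives a "blended" coefficient that is wrong for BOTH groups.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, true_beta = 20000, 1.0

def simulate(scale):
    x = rng.normal(size=n)
    p = 1 / (1 + np.exp(-scale * true_beta * x))   # logit with scale = inverse "noise"
    return x, rng.binomial(1, p)

xa, ya = simulate(scale=1.0)    # consistent group
xb, yb = simulate(scale=0.25)   # inconsistent (high-variance) group

def fitted_slope(x, y):
    return sm.Logit(y, sm.add_constant(x)).fit(disp=0).params[1]

print(round(fitted_slope(xa, ya), 2))                          # ~1.0
print(round(fitted_slope(xb, yb), 2))                          # ~0.25 (attenuated by the scale)
print(round(fitted_slope(np.r_[xa, xb], np.r_[ya, yb]), 2))    # in between: biased for both groups
```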


COVID-19 variants. Statistical concerns

This piece draws heavily upon a piece published at NakedCapitalism. Pretty much all the references regarding epidemiological explanations and “on the ground” observations are there so in the interests of brevity (and my own schedule at the moment) I’ll simply give that as the main reference. I’ll put a few notes in regarding other issues though.

*****

I’ve written before that stated preference (SP) data using logit/probit models – examples of limited dependent variable models, so-called because the outcome isn’t continuous like GDP or blood pressure – are very hard to interpret [a]. Technically they have an infinite number of solutions. It is incumbent upon the researcher either to collect a second dataset, totally independent in nature of the first (so we now have two equations to solve for the two unknowns – mean and variance) or use experience and common sense to give us the most likely explanation (or a small number of likely ones). This is technically true of revealed preference data (actual observed decisions) too [b] and Covid-19 might be an unfolding horrific example of where we are pursuing the “wrong” interpretation of the observed outcomes.

Background: What’s happened in various “high vaccination” countries so far?

In short, rates of Covid-19 initially dropped through the floor, typically in line with vaccination coverage, then started bouncing back.  However, the large correlation with hospitalisation and death did not re-appear. This is consistent with the fact the vaccines are not “sterilising vaccines” – you can still catch Covid-19, it’s just that the vaccine is (largely) stopping the infection from playing havoc with your body.

Sounds like a step forward? Actually, without widespread adjunct interventions (good mask usage etc) to stop the spread in the first place, this is potentially very very bad. We’ve already seen variants arise. The Delta variant is causing increasing havoc, whilst Lambda is becoming dominant in South America. The Pfizer vaccine – which thanks to media failures was often touted as “the bestest evah” – seems particularly ill-equipped to deal with Delta. NC is covering this very well.

The bio-chemists and colleagues can give good explanations of WHAT is happening pharmacologically and epidemiologically in producing these variants. Our archetypal drunk lost his keys on the way back from the pub. However, just like the story, he’s looking for them only under the lamp-post, whilst they’re actually on the dark part of the road; if you can’t or won’t look in the right place, of course you won’t find the solution. This is what many experts are doing, and it is why Delta etc. could keep happening at an increasing pace. Perhaps that is the real story: one with roots in statistics.

What’s the possible statistical issue here?

Consider how medical statisticians (amongst others) typically think about discrete (infected/non-infected, or live/die) outcomes. As in the SP case, the [0,1] outcome is incapable of giving you a separate mean – “average number of times a human – or particular subgroup – would get bad Covid-19 for a given level of exposure” – and variance – “consistency of getting it for this given level of exposure”. If 80% of Covid sufferers at a given exposure level needed hospital care but only 20% do when vaccinated, then analysts tend to think that the average (mean) tendency has gone down.

Suppose the “extreme opposite interpretation” (equally consistent with the observed data) is true? Suppose it’s a variance effect? So, the vaccine is not really – on average – bringing the theoretical average hospitalisation rate down. Or not by much anyway. It is simply “pushing what was a high peaked thin mountain into a fat, low altitude hill” in the vaccination function relating underlying Covid-19 status with observable key outcomes. Far more people are in the tails, with an emphasis on the “hey, now Covid is no big deal for me” end [c]. The odds of hospitalisation following vaccination goes way down. However, if you look at subgroups, you’ll (if you’re experienced) be spotting a tell-tale giveaway: the pattern of odds ratios across subgroups by vaccination status is VERY SIMILAR TO BEFORE, they have all just (for instance) halved. This is a trick I’ve used in SP data for decades and more often shows that some intervention has a variance effect. Fewer people are going to hospital if vaccinated but their average tendency to get a bad bout is actually unchanged by vaccination (particularly if we add the confounding factor of TIME – Covid is changing FAST).
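
The crude version of that “tell-tale giveaway” check looks like this (the subgroup numbers are invented purely for illustration):

```python
# Crude check described in the text (invented numbers): if vaccination mainly changed
# the SCALE (variance) rather than subgroup-specific means, the log-odds estimates for
# every subgroup shrink by roughly the same multiplicative factor.
import numpy as np

log_odds_unvax = np.array([1.8, 1.2, 0.9, 0.4])        # e.g. four age bands, unvaccinated
log_odds_vax   = np.array([0.92, 0.61, 0.44, 0.21])    # same bands, vaccinated

print(np.round(log_odds_vax / log_odds_unvax, 2))      # all close to ~0.5: one common shrinkage
# A roughly constant ratio across subgroups is the signature of a variance (scale) effect;
# genuinely different mean effects by subgroup would break this pattern.
```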

This provides an ideal opportunity for the virus to quietly mutate and spread and, via natural selection coupled with fewer people taking precautions, to produce a variant that is both “longer incubating” and then potentially “suddenly more lethal”.

So vaccines were a bad thing?

At this point in time I’ll say “NO” [d]. However, in conjunction with bad human behaviour and an inability to think through the statistics, they have led to a complacency that might lead to worse long-term outcomes. The moral of the story is one that sites like NC have been emphasising since the start and which certain official medical and statistical authorities really dropped the ball on right from the get-go.

The vaccines merely bought us time. Time we wasted. Now a long-ignored problem with the logit (or probit) function, being the key tool we use to plug “discrete cases of disease” into a “function relating underlying Covid-19 to observed disease status” might be our undoing. Far fewer people are going to hospital following vaccination (smaller mean effect in terms of lethality) but a MUCH larger number of people have become juicy petri dishes for the virus to play in (larger variance). We have concentrated way too much on the former. The statistics textbooks tend to stress that explanation.

Trouble is, too few people read the small print at the bottom warning them that their logit/probit estimates could just as easily arise from variances, not means. Assume you observe:

  • an answer of 8 in the non-vaccinated group. You assume mean prevalence = 8 and (inverse of) variance = 1, as the stats program always does: 8*1=8.
  • In the vaccinated group you see an answer of 4. Wow, vaccination has halved the mean (prevalence) – because you ALWAYS ASSUME THE VARIANCE IS 1, you “must” have got 4 via 4*1, since that is the only way to get 4!

Oops. How you SHOULD have “divvied up the mean and inverse of variance” was 8*1 in the non-vaccinated group and 8*(1/2) in the vaccinated group. You have a treatment effect that is in fact unchanged. The inverse of the variance halved – in other words the variance doubled. People less consistently got ill [e].

For someone like me who used to deal primarily with stated preference data the worst thing that could happen was that I’d lose the client when model-based predictions went wrong (because I’d made the wrong “split” between means and variances).

The stakes here are much much bigger. This piece is the “statistical issue” – a potential big misinterpretation of Covid-19 data – which really worries people like me.

*************

[a] See my blog and NC reprinted one of my posts.

[b] This is how and why Daniel McFadden won the so-called Economics Nobel – he predicted the demand for the BART in California extraordinarily accurately, before it was even built. He had both stated and revealed preference data on transport usage.

[c] You can’t keep symmetry if you keep squashing the mountain down. The right tail hits 100%. So you “see” a lot more people in the left tail (doing “well”) as a result of vaccination. This leads to the mean effect – so vaccination is unlikely to be 100% variance related. There must be a certain degree of mean effect here. My point is that the “real mean effect of vaccination” is theoretically a lot less than we observe from the data.

[d] With the “Keynes get out-clause”.

[e] The actual logit and probit functions basically spit out a vector of “beta hats” which are actually “true betas MULTIPLIED by AN INVERSE function of the variance on the latent scale”. So when variances go up – which in SP data happens when you get answers from people with lower literacy etc – the “beta hats” (and hence odds ratios) all DECREASE in absolute magnitude. In other words, confusingly for non-stats people, we (to make the equation look less intimidating) tend to define a function of the variance (lambda – not to be confused with the Covid variant) or mu that is MULTIPLICATIVE with the “true beta”. Believe me, if you think this was a stupid simplification that will lead to confusion as people talk at cross-purposes, you are not alone.

RCV vs MLV redux

Every time I think I’ve shown that ranked-choice voting is a step forward but no panacea, it appears again. Latest reply to a NakedCapitalism post below:


Ranked choice voting *sigh*. Don’t get me wrong: in a choice between RCV and FPTP (the status quo in most of the USA and UK) I go for RCV in a heartbeat, particularly since I have dual UK-Australian citizenship. However, there are people playing with fire here using arguments they don’t understand. And if you’re interested in anecdotal evidence, those of us who spent decades eliciting public preferences in Australia came to despise the Conversation for its terrible editing etc, which allows statements like the following in the current article:


“Some critics incorrectly claim that ranked choice voting lets voters cast more than one ballot per person, when in fact each voter gets just one vote.”


True but highly misleading, and proponents will be in REAL trouble when the FPTP PMC class find the right “sound-bite” to counter this. Here’s one possibility: “RCV lets voters cast one vote but some votes are worth a lot more than others”. My colleague Tony Marley couldn’t prove this was wrong in his PhD back in the 1960s and freaked. He ended up making a much milder statement. RCV (based on the rank ordered logit model) does NOT give everyone equal weight in the (log)likelihood function and this can matter hugely.


 South-West Norfolk is a UK Westminster Parliamentary constituency in which RCV would make ABSOLUTELY NO DIFFERENCE because the Conservatives practically always win it with way more than 50% of the primary vote – there would BE NO “second round” etc. The fact that at the last election a candidate for the Monster Raving Looney Party stood – something that only ever used to happen in the constituency of the sitting Prime Minister as a publicity stunt – shows that anger in the general population is spreading.


Statements above in this thread have been that RCV can lead to “communism” or “mediocrities”. Perhaps, but that has not been the experience in Australia. In fact it simply allowed “non-mainstream” people on both sides to gain a few seats, while the broad split in terms of “left and right” was replicated in Parliament. Sounds good? Hmm. Statement of (non) conflict of interest: I am one of 3 world experts in a way of eliciting public preferences called Best-Worst Scaling. Dutch/Belgian groups applied it (with NO input/knowledge from me) to voting to give “Most-Least Voting” – a “scaled back” version of RCV in which you ONLY indicate your “top” and “bottom” candidate/party. This is because people are lousy at “middle rankings”. I once hypothesised a scenario where RCV could give a seriously problematic result which MLV might avoid. To be honest I considered it a theoretical curiosity. Until it happened in the 2016 Iowa Democratic primary. Essentially there was a dead heat (approx 49.75% each) for Sanders and Hillary. O’Malley came a distant third with 0.5%.


 RCV would have essentially given 0.5% of voters in a single state the decision as to who won and got the crucial “momentum” that might have made them unstoppable. MLV would probably have gone as follows: Sanders supporters put him as most desired and Hillary as least (49.75-49.75=0% net score). Vice versa for Hillary. O’Malley gets 0.5%; which of Sanders/Hillary is “last” depends on whatever O’Malley’s supporters think is the “worst evil”. I don’t know who they’d have chosen. But it doesn’t matter. He’d have won.


 Many will say “for the last placed first-voted individual to win is a travesty”. I’d reply “why is this any less of a travesty than 0.5% of Iowa Democrats potentially deciding who faces Trump?” Depends how you phrase it. Arrow’s Impossibility Theorem all over again. Effectively Iowa was a tie in which neither Sanders nor Hillary “deserved” a win. It should have caused the “race” to move on. Instead O’Malley dropped out. You see how the SAME votes can lead to VERY different results depending on the system (likelihood function)? After all, MLV with 3 candidates *IS* RCV! At least in the “ranking” you give. But the AGGREGATION and WEIGHTING is different.


 Final note – for those who think I’m plugging something “I devised” – you haven’t read the post. I WISH I’d been involved with voting theory. But it’s one area I had NOTHING to do with using BWS. I admire the Dutch and Belgians for applying it this way and it has, in fact, been used in Baltic States. So it ain’t some weird theoretical curiosity. Ironically it MIGHT lead to some centrist “mediocrities”. But given Hillary and Trump, maybe that might not have been so bad?


 RCV is a step forward. But be careful what you wish for. Personally I think that redistricting to eliminate uncompetitive seats (gerrymandering) and other aspects of electoral reform are at least as important as changing the voting system. I’ve given the references before on here and elsewhere. Happy to engage in constructive discussion since NO voting system is fair.


DCE references

I’ve promised academic references for certain statements in last few blog entries. Here they are, numbered according to numbers in the articles elsewhere:

[1] Specification Error in Probit Models. Adonis Yatchew and Zvi Griliches. The Review of Economics and Statistics, Vol. 67, No. 1 (Feb. 1985), pp. 134-139 – THIS PAPER SHOWS WHY YOU MUST ADJUST FOR DIFFERENT VARIANCES BEFORE AGGREGATING HUMANS ELSE YOU GET BIAS NOT SIMPLY INCONSISTENCY. RELEVANT TO LOGIT OR PROBIT MODELS.

[2] Combining sources of preference data. David Hensher, Jordan Louviere, Joffre Swait. Journal of Econometrics, Volume 89, Issues 1–2, 26 November 1998, Pages 197-221 – THIS PAPER SHOWS THEORETICALLY AND EMPIRICALLY WHY YOU MUST NET OUT VARIANCE DIFFERENCES BETWEEN DATA SOURCES (INCLUDING SUBJECTS) BEFORE AGGREGATING THEM.

[3] Confound it! That Pesky little scale constant messes up our convenient assumptions. Jordan Louviere & Thomas Eagle. USER-ACCESSIBLE EXPLANATION OF VARIANCE ISSUE IF [1] AND [2] UNAVAILABLE.

[4] Best-Worst Scaling: Theory, Methods and Applications. Jordan Louviere, Terry N Flynn, Anthony AJ Marley. Cambridge University Press (2015).

[5] The role of the scale parameter in estimation and comparison of multinomial logit models. Joffre Swait & Jordan Louviere. JMR 30(3): 305-314.


Perils of discrete choices in vax and surveys – SPOILERS FOR AVENGERS ENDGAME

NB: Edits done between 16th and 18th June 2021 for clarity and to explain certain statistical concepts. Key references as of the 18th are now included; they link to the references post here.

Warning – for some potential LULZ I’m going to use the Marvel Cinematic Universe (MCU) as a way to explain variability in outcomes. If you haven’t watched up to and including Avengers Endgame and don’t want spoilers go watch them first!

I find myself having to explain why “this” or “that” study (frequently survey based, but sometimes a clinical study) has results that should be regarded as deeply suspect. I hope to provide some user-friendly insights here into why people like me can be very suspicious of such studies and what you as the average reader should look out for.

To aid exposition, I’ll do the old trick of “giving the punchline first, so those who don’t want the mathematics and logic can skip it”.

In many contexts you get a “one shot” for an individual – “live/die” or “cure/non-cure” or “Democrat/Republican/Libertarian/Green” in a general election etc. You don’t know the variability. Would that person – call her Mrs Smith – have displayed the same result in 14,000,604 other parallel universes which in key respects are the same as ours? MCU fans will instantly recognise this number. Spoiler alert – We’re getting into the territory of sci-fi and the Marvel Cinematic Universe here! When I say “variability” I am SPECIFICALLY referring to the variability ONE person – Mrs Smith – exhibits. Would she ALWAYS do “action p”? Or would she, in different universes, do “action q”?

Unfortunately we observe actions in ONE universe. We don’t in reality OBSERVE the “other” 14,000,604 universes – we are not Doctor Strange. So we don’t know. So scientists assume “what we observe” is, in some sense, the TRUE effect. For the statistically minded who want to say “GOTCHA – we have standard errors and confidence intervals etc to handle uncertainty” I’ll reply: go down and read the asterisked section, because we clearly are NOT on the same page here.* (TL;DR – that “variability” says NOTHING about what Mrs Smith’s variation is in different universes but is based on variation ACROSS SUBJECTS who might be LIKE Mrs Smith. The two are potentially VERY VERY different.)

This can lead to spectacularly bad outcomes but scientists defend themselves by saying “we had no choice! You can’t have a do-over like in the MCU and see if the patient dissolves to dust under a different stimulus!” Calculating the “variation intrinsic to patient Smith” is impossible. I’m here to discuss how, if we’re really interested and willing to put resources into it, we CAN find out the variation in many circumstances (though not a Thanos or other “death” event) – DID A “RESPONSIVE” PERSON RESPOND BECAUSE THEY ALWAYS WILL OR DID WE SEE THE ONE OUT OF 14,000,605 OCCASIONS IN WHICH THEY DID? DID THE SET OF STIMULI REQUIRED TO REVERSE THE SNAP (KILLING HALF OF ALL LIVING THINGS) – ONE OUT OF 14,000,605 – REQUIRE IRON MAN TO DIE? (UNFORTUNATELY, YES). AVENGERS ENDGAME IS NOT JUST POPCORN SCI-FI BUT A SUBTLE AND CLEVER EXPLANATION OF DISCRETE CHOICE MODELLING.

To start this discussion we’ve got to go back to stats 101. Heteroscedasticity. You’ll probably have been shown it in a graph. As “x” values increase, so do “y”. However, the RANGE of values for y increases – you see a “funnel” with the tip pointed towards the lower left and it getting wider as you move toward the upper right. If you do a least squares regression (to explain y values from a given x value) you’ll get the right “average” answer. However, your measure of “certainty” or “variability” – the standard error and thus confidence interval – will be wrong. This is what I’ll call a “nuisance”. Your “main answer” (vector of betas, showing the average effect of an explanatory variable on the outcome) will be correct on average, but your level of confidence will not and needs to be adjusted. There are methods to make this adjustment, so all can be well. Continuous outcomes (like GDP, blood pressure, etc) are easily analysed using such methods. “Discrete” (one out of two or more) outcomes (yes/no……die/live……Democrat/Republican/Libertarian/Green) are not. [1,2,3,5] But here’s the important takeaway I’m going to explain – heteroscedasticity in models with continuous outcomes (GDP/blood pressure etc) is just a “nuisance”. Heteroscedasticity in limited dependent variable models (yes/no……Democrat/Republican…..etc) is a FUNDAMENTAL PROBLEM CAUSING BIAS, not just “standard errors that must be tweaked” – you usually DON’T KNOW THE DIRECTION OF THE BIAS, LET ALONE ITS MAGNITUDE. YOU ARE ROYALLY SCREWED.
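
To show just the “nuisance” half of that contrast, here is a minimal sketch (invented numbers): ordinary least squares still recovers the right slope under a classic heteroscedastic “funnel”. The discrete-outcome bias is what the sketches in the Covid pieces above illustrate.

```python
# Minimal sketch (illustrative only): in a LINEAR model, heteroscedastic noise leaves
# the slope estimate essentially unbiased -- only the standard errors need fixing
# (e.g. with robust/White standard errors). Contrast with the logit sketches above.
import numpy as np

rng = np.random.default_rng(42)
n, true_slope = 50000, 2.0
x = rng.uniform(1, 10, size=n)
y = true_slope * x + rng.normal(scale=0.5 * x)   # noise "funnel": spread grows with x

slope = np.polyfit(x, y, 1)[0]
print(round(slope, 2))    # ~2.0 despite the funnel
```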


Discrete Outcomes and Their Problems

Discrete outcomes are generally coded (0,1) or (0,1,2) or (0,1,2,3) etc. These numbers usually have no intrinsic numerical meaning. They could be (a,b) or (a,b,c) or (a,b,c,d). However, the point is, to adequately understand “what is going on underneath” you usually need something that links a discrete outcome to some (albeit hypothesised) latent (unobserved) continuous numerical scale. Let’s consider 1950s America when party allegiance generally was on a single left/right dimension with “Democrat” arbitrarily chosen as the “positive” end of the scale and “Republican” as the negative. Reverse these if you want – it makes NO difference to my argument.

Mrs Smith has an underlying position on the latent “party allegiance scale”. If it is positive, she is more likely to vote Democrat and vice versa. Note she is NOT guaranteed to vote one way or the other. I’m just saying, if she is “strongly positive” then the chances of her switching to the Republican Party are small, but not zero. As her position on the “party allegiance scale” zooms up to positive infinity, the chances of her voting Democrat asymptotically approach one (but NEVER get to one).

There needs to be a “link function” to relate a (hypothesised) position on the latent party allegiance scale with a discrete choice (let’s keep it simple with just Democrat or Republican for now). Different academic disciplines prefer different functions – some prefer those based on the normal (Gaussian) distribution whilst others favour those based on the logistic distribution. In practice it makes almost no difference to the answer. Those of us who like the logistic function do so because it is a “closed form” function when there are 3+ outcomes – in other words there is a mathematical formula that can be “maximised” using an established technique. The multinomial probit doesn’t have one – you have to “brute force attack” it. Doing brute force means you must make sure your “peak” is not the tallest mountain in the Appalachians but is, in fact, Mount Everest. The logistic “maximisation routine” can make the same mistake but it’s generally easier to spot mistakes and get to Everest first time.
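
A minimal sketch of why the logit-versus-probit choice barely matters in practice (the rescaling factor of roughly 1.7 is the standard one; everything else is illustrative):

```python
# Minimal sketch: the logistic and probit links give almost identical probabilities
# once you allow for an arbitrary rescaling of the latent scale (a factor of ~1.7),
# which is why the choice of link rarely changes substantive results.
import numpy as np
from scipy.stats import norm
from scipy.special import expit    # logistic CDF

v = np.linspace(-4, 4, 9)          # positions on the latent "party allegiance" scale
p_logit  = expit(v)
p_probit = norm.cdf(v / 1.7)       # probit link on a rescaled latent variable

print(np.round(np.abs(p_logit - p_probit).max(), 3))   # maximum gap of roughly 0.01
```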

The link function relates a position on the latent party scale to a discrete (Democrat/Republican) outcome. In estimation we in fact use it in reverse – OBSERVING Mrs Smith’s voting and (via the link function) INFERRING her position on the latent party allegiance scale.

How do you infer a position on a continuous scale from a (0,1) response (Democrat or Republican) in traditional logit/probit studies?

You typically look at the “other factors” defining Mrs Smith, her circumstances etc and most importantly, draw strength from OTHER people in your sample. See the problem? You are already making inferences on the basis of people who are NOT Mrs Smith. This leads directly to the issue of “variability” and so I’ll put the dreaded asterisk in for you stats bods.* We are inferring information from people “like Mrs Smith”. Just because they’re female, similar age, sociodemographics etc, does NOT mean they are useful in placing Mrs Smith accurately on the latent scale. You really need to see what she’d do in various scenarios, NOT what people “seemingly like her” do.

So, this process of making inferences is incredibly dangerous unless you have seen Mrs Smith’s choice/outcome behaviour under a bunch of different scenarios. If you have merely observed her in the ONLY ONE out of 14,000,605 universes where she voted Democrat then you’ll vastly over-estimate her loyalty to the Democrats. If she in fact votes democrat in all 14+ million universes then you’re on more solid ground. The point is, YOU DON’T KNOW (unless you’re Doctor Strange).

As it happens, Dr Strange looked at 14,000,605 universes. He saw that there was one, and ONLY one “intervention” that could lead to victory (Thanos being defeated and Earth being restored). Unfortunately the “intervention” required Tony Stark to die. He held up one quivering finger at the crucial moment in the movie to show that the “Endgame” required Tony to act. Tony was clever enough to know what this meant. Which is partly why the movie is so family-blogging great. The Russo brothers, directing, knew what they were doing. Tony “did action x” and the one outcome that we “needed” out of a possible 14,000,605 outcomes came to pass. And he died. But Earth and the universe was restored.

Discrete Choice Modelling (Discrete Choice Experiments – DCEs) As a Solution

How do we escape the above conundrum? In effect, we try to simulate some “key universes” from the 14+ million that allow us to better understand “how committed Mrs Smith is to the Democrats”. We do this via a Discrete Choice Experiment or DCE.

How, broadly, do DCEs work? In essence you vary all the “key stimuli” that influence a decision in some pre-defined statistical way so as to “simulate” “key universes”. Then see what the person does in each one. You can then quantify the effect each stimulus has on that person’s utility function. Note carefully: I say “that person”. The optimal DCE can be done ON A SINGLE PERSON. You map their “demand surface” (multi-dimensional version of the classic “demand curve”) across multiple dimensions so you can “plug in” any combination of levels of stimuli and predict if they’d choose a or b, Republican or Democrat…..or whatever your outcome is.

For the stats nerds, you must have non-negative degrees of freedom. In other words, for 8 estimates (“effect of this level of this stimulus” is a degree of freedom) you must have 8+ choices made by Mrs Smith. I’ve done this before. Incidentally, because I know the rules well I know when I can break them – I use least squares regression on discrete outcomes (0/1). You are taught NEVER to do this. Indeed, unless you know what you are doing that is good advice. But under certain circumstances it tells you a lot. Not in terms of “anything you’ll quote in published results” but as a “quick and dirty” way of identifying individuals who have extreme preferences and “getting into the tails of the logistic distribution”. I write macros that “find me anyone whose choices are entirely dictated according to whether the hypothesised election involves elimination of income tax”….or whatever.
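
A minimal sketch of that kind of macro, written in Python rather than Stata, with an invented flagging rule:

```python
# Minimal sketch (my own illustration of the "quick and dirty" trick): run a crude
# linear probability model person-by-person and flag anyone whose choices are
# essentially dictated by a single attribute. The dominance threshold is invented.
import numpy as np

def flag_single_issue_voter(X, y, dominance=0.9):
    """X: (tasks x attributes) design matrix for ONE respondent; y: their 0/1 choices."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # crude per-person linear probability model
    total = np.abs(beta).sum()
    return total > 0 and np.abs(beta).max() / total >= dominance   # one attribute carries ~all the weight

# Usage: loop over respondents, collect the IDs where the flag is True, then treat
# those people separately instead of throwing them straight into a pooled logit.
```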

This “cheat” is a good way to test an absolutely key problem with DCEs – that two people, one old, one young, might have IDENTICAL views, but the younger person seems to “respond more readily” to stimuli. I’ll group a bunch of 30 year olds and see their “beta estimates” are all quite skewed – they are strongly affected (IT SEEMS) by the stimuli. I group a bunch of 70 year olds and see their “beta estimates” are much smaller. They SEEM to be less affected by the stimuli. But if the “pattern of betas” is the same I know both age groups in fact have the same preferences. What is merely happening is that the “error rate” is higher among the 70 year olds – something commonly seen. The average level of age-related cognitive impairment is higher and they “mistakenly” choose the “less preferred” manifesto more often, “diluting” their estimates. However, when the chips are down in a real world situation (like the voting booth) they might be no different from their younger brethren. They concentrate hard and take voting seriously. Suddenly that 60/40 split seen in the SP data among the “oldies” becomes 80/20 (just like the young’uns) in the “real” – Revealed Preference (RP) – data.

Important point: in a Stated Preference (SP) study you deliberately “assume away” complicating factors. Frequencies then become very skewed. In a Revealed Preference (RP) study – REALITY – there is all sorts of cr$p that impinges upon your choices, so frequencies tend to move towards 1/n where n is the number of response categories. So people often look “less sure” in real life, with less skewed choice frequencies, but this just reflects the fact life is complicated and people don’t “assume everything else is equal”. Remember this.

How do you Design a DCE in Practice?

Firstly, do qualitative work to find out what stimuli (attributes) matter to respondents (those which cause their choices to change, depending on what “level” the attribute takes).

Then, to design the “scenarios” you typically design a DCE in one of two ways. The first way is broadly Bayesian. You use prior knowledge as to what levels of the attributes “typically influence subjects like Mrs Smith”. You then construct a design – a series of pairs of manifestoes in the 1950s political example, one Democrat, one Republican – that vary these attributes in ways that “quickly and efficiently” establish what issues or groups of issues cause Mrs Smith to change her vote. Think of a bunch of dots in the (x,y) plane of a graph grouped around the best-fit line. You don’t bother to ask about manifestoes that are “far away” from Mrs Smith’s likely solution. This method gives you incredibly accurate estimates. BUT if your priors about Mrs Smith are wrong, you’ll get an answer that is “incredibly accurate but wrong”. It’s all very well to know how many millimetres the highest peak in the Appalachians is but that’s no bloody use since Everest is where you should be measuring!

The second way utilises “orthogonal designs”. These essentially vary the stimuli (attribute levels) so they keep moving at right angles so as to “cover all the utility space”. The downside is that some “pseudo-elections” involve manifestoes with policies that are pretty unrealistic. The upside is that “you’ve covered all possibilities”. You go looking for peaks in Bangladesh. Top tip – that’s dumb. But ultimately, because you are ALSO looking at the Nepalese plateau, you’ll find Everest. Is this better or worse? It depends. If you use Bayesian methods sensibly you can do at least as well as orthogonal designs. But they are a delicate power tool not to be used by the non-expert.
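
For the curious, here is the simplest possible orthogonal design – a full factorial over three invented two-level attributes – just to show what “moving at right angles” means. Real DCEs use orthogonal fractions of much larger factorials, built from catalogued plans or specialist software.

```python
# Minimal sketch (illustrative only): a full factorial over three two-level attributes.
# A full factorial is the simplest orthogonal design -- every pair of attribute columns
# is uncorrelated -- so main effects can be estimated cleanly.
from itertools import product
import numpy as np

levels = {"price": (0, 1), "battery_life": (0, 1), "storage": (0, 1)}   # invented attributes
design = np.array(list(product(*levels.values())))    # 8 profiles, coded 0/1

coded = 2 * design - 1                                 # effects coding (-1/+1)
print(coded.T @ coded)                                 # off-diagonals all zero: orthogonal
```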

Consider the original iPhone. Whilst phones that utilised some form of interactive screen had been around, the iPhone was genuinely revolutionary. Using “priors” based on the Blackberry or stylus-driven Windows phones would have been seriously dangerous. The iPhone “thought outside the box”. That’s when an orthogonal design is best. Sometimes Mrs Smith doesn’t KNOW what she’d like until a hypothetical product is mocked up and shown on the screen. Bayesian methods would “stick to variations on what is available”. Orthogonal designs “imagine stuff” – OK, at the expense of asking dumb questions about Bangladesh mountains…..but whilst I value both approaches, I prefer orthogonal designs. Cause I like to investigate “new stuff”. Sometimes the “peak” is something new you never knew was there.

Reliability of DCE (Stated Preference) Answers

Do people answer DCE scenarios honestly? Some do, some don’t. But the important thing is that experienced researchers like me know how to spot liars, those who have “done their best but have a big dose of uncertainty” and those who “are pretty sure of their answers” (even if their honestly given answers happen to be wrong).

How do liars give themselves away? Well, a DCE is typically operating in 5+ dimensions. The number of attributes (stimuli) defines the number of dimensions. If you lie – either just to be annoying or to get through the study quickly and collect reward points in some online panel – then you must lie in a way that “works across all the dimensions”. Humans generally can’t do this. I can often spot liars a mile off just eyeballing data. If you’re going to fool me you must remember your answers in 5+ dimensions and use some sort of crib sheet to ensure your lies are consistent and your “preferred options” all conform to your “story”.

If you suddenly prefer a cellphone with no memory and elsewhere you have made clear you like a lot of storage to look at porn family pictures then it will raise a red flag in my data checks. Because you’ve HAD to sacrifice something else. And when you make totally mutually contradictory choices I get suspicious. (Quite apart from the fact I’m male and we are no angels and I’ve yet to see a male who likes an electronic device that can’t “store a lot of stuff”).

So if you want to mess with me you’ve got to do it in 5+ dimensions. And I practically guarantee you can’t. Which means I give your ID to the panel owner. They typically won’t take action straightaway. However, when University of X samples you for THEIR study, they see the “flag” stating “this person might take the p!ss”. If they find you’ve clicked through quickly to get points and money quickly by paying no attention to the questions or are p!ssing about you know what happens? The panel company deletes your points ($). Bad luck – you just got booted out for being a d!ckhead.

TOP TIP – IF YOU’RE GOING TO LIE, MAKE SURE YOU CAN DO IT IN 9 DIMENSIONS BECAUSE OTHERWISE WE’LL SPOT YOU. YOU’LL BE BLACKLISTED PRONTO.

So it’s really much more hassle to lie in 9 dimensions than to just answer honestly. So don’t do it. If I’m unsure about you I won’t suggest you be blacklisted but I WILL instruct my analysis program to place you in the group with “higher variance” (i.e. you are “less consistent” and might have a tendency to give mutually contradictory answers). You’ll have a lower contribution to the final solution – I am unsure if you’re just someone who might have an odd demand function I’ve never before encountered, someone cognitively challenged by the task or you are taking the p*ss.

What about “other” stated preference methods? Willingness-to-pay is the one that has a much longer pedigree, but also is much more contentious. People like Yves Smith of NakedCapitalism have expressed unease with WTP estimates. I happen to agree with her – despite the fact I’ve co-authored with one of the world’s top WTP experts, Richard Carson of UCSD whose figures were used in the ExxonValdez settlement. I just find it hard to believe most humans can “think up a valid number to value something”. Sorry Richard. However, I HAVE used WTP on occasion when I’ve had no other choice. It’s just one of those tools I think must be used very very carefully.

Where Next?

DCEs will, going back to our 1950s politics example, present Mrs Smith with a series of hypothetical, but REALISTIC, election scenarios. If you use orthogonal designs then unfortunately SOME scenarios might seem odd…..but with knowledge and experience you can minimise the number of silly scenarios. So, present hypothetical manifestos (one per party). The key policies of each manifesto are the “stimuli” which are varied. Ask her to state her preferred party each time. If, out of 16 pairs, she says “Democrat” every time then this is both good and bad. Good in that we can “segment her off” as a diehard Democrat. Bad in that she violates a KEY assumption of regression models – that she has an “error” term. She is “deterministic” not “probabilistic”. Thus keeping her in ANY regression model leads to bias (at least, any regression model based on limited dependent variable outcomes – our logit and probit models).

In terms of why you should be “netting her out if she says Democrat every time” please read the asterisked bit. A DCE, in an ideal situation, should be doable for a SINGLE PERSON. Try running a logit/probit regression in which the dependent variable doesn’t vary. The program (Stata or whatever) will crash or give an error – there is no variation in the dependent variable. Mrs Smith should not be in the regression! The “link function” is not designed to cope with 100% choices.

Why “Non-Experts” Make Horrid Mistakes

You might think adding Mr Patel, who does switch party depending on policy, solves things. I have lost count of the number of studies I refereed whose authors thought this was “a solution”.

The thinking is, “OK, 100% + 50%”: average = 75% and that can go into the link function to give an average utility, right? WRONG. The link function can’t deal with 100% but 75% is fine. However, 100% from Mrs Smith is actually “infinite utility”. Do you know what you’ve actually done? You’ve asked “what’s the average of infinity and (say) ten?” Do you realise what a dumb question that is? If you think taking averages involving infinity is OK you should transfer out of all mathematical subjects pronto. Go do drama or liberal arts.
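
A short sketch of why (the logit link is standard; everything else here is illustration):

```python
# Minimal sketch: the logit link maps a 100% choice probability to an INFINITE latent
# utility, so "averaging" Mrs Smith with Mr Patel on the latent scale is meaningless.
import math

def log_odds(p):
    return math.log(p / (1 - p))   # the logit link

print(log_odds(0.5))               # 0.0 -- Mr Patel sits in the middle of the latent scale
print(log_odds(0.999999))          # already ~13.8 and still climbing...
# log_odds(1.0) raises ZeroDivisionError: Mrs Smith's latent utility is +infinity,
# so the "average" you are tempted to take involves infinity.
```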

You KNOW Mrs Smith votes Democrat no matter what. Use some family blogging common sense – this is what frustrates me so much about people using these models “cause they look cool”. Separate her out.

Results?

Results from a limited dependent variable model (logit/probit) are notoriously hard to interpret. Those who have seen my objections on NakedCapitalism will know what is coming. Here is a hopefully abbreviated version of the explanation.

Here’s the math. We can solve the likelihood function to find what mean and variance are “most likely” to produce the pattern of data we observed. Great. However, in limited dependent variable models it hits a wall. Why? The mean and the variance (technically a function of the variance) are multiplied by each other. So, let’s say you get the “solution” of 8. If the solution is mean × (function of) variance, is this 8×1? Or 4×2? Or 2×4? Or 1×8? Or any of an infinite number of other combinations?

The underlying problem is that we have ONE equation with TWO unknowns. You need a SECOND equation in order to split the mean and variance.
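
In symbols (my notation, with F the logistic or standard normal CDF and lambda the scale factor that is an inverse function of the variance):

```latex
% One equation, two unknowns: the data identify only the PRODUCT of the "mean"
% parameters (beta) and the scale (an inverse function of the variance).
P(y_i = 1 \mid x_i) \;=\; F\!\left( \lambda\,\beta' x_i \right),
\qquad \lambda \propto \frac{1}{\sigma}.
% Any (\lambda, \beta) pair with the same product fits the data identically:
% 8 = 1 \times 8 = 2 \times 4 = 4 \times 2 = 8 \times 1 = \dots
```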

A Second Equation?

A second set of data can come from many sources. McFadden (who won the “Economics Nobel”) used real data from people’s decisions regarding public transport to do this – it “calibrated his model” (in other words, gave the right starting numbers upon which all the rest resided!). Using  real data – “ Revealed Preferences” – as opposed to the “Stated Preferences” discussed here is very useful.[2,5] RP data is good but is useless when you want to design something fundamentally new or “think outside the box”.

Where Next?

Attitudes have proven to be a salvation.[4] Segment people according to attitudes and the “uncalibrated voting data” you have can suddenly become highly predictive. It did for me at the 2017 General Election! 🙂

However, whenever I see a story with “rates” quoted for groups……be it “views on vaccination” or anything else, I roll my eyes. Why? Because whilst “patterns” (“definitely will vaccinate/probably will vaccinate/not sure/ probably won’t/definitely won’t) are somewhat stable, the initial calibration is often wrong. Thus, the media just yesterday in UK breathlessly pronounced that the proportion of “vaccine hesitant” people who did, in fact, get vaccinated was really high. 20 years of working in the field told me something……..the skewed proportions expressed in stated preference studies are almost NEVER reproduced in real life. I knew full well that the proportion of people refusing the vaccine would NOT BE SKY HIGH AS THE MEDIA SUGGESTED – in fact when “revealed preferences” became available, we’d see that whilst the “pattern” across the (say, 5) response categories might be maintained, the raw rates would stop being so skewed and would be “squashed” towards 20% each. In effect, this “sky high” number of anti-vaxxers would be squashed towards one fifth (all the response category proportions would get closer to 1/n where n is the number of response categories). It might even be the case that the pattern was changed and the “no, never” bunch would go below 20%. THAT is why you must “calibrate” stated preference data.
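
A minimal sketch of that “squashing towards 1/n” (the category utilities and scale values are invented):

```python
# Minimal sketch (invented numbers): the same underlying "pattern" across five response
# categories, reported at two different consistency (scale) levels. As the scale falls
# (variance rises, e.g. moving from a tidy survey to messy real life), category shares
# get squashed towards 1/n = 20% while keeping their ordering.
import numpy as np

def shares(scale, v=np.array([2.0, 1.0, 0.0, -1.0, -2.0])):   # latent category "utilities"
    e = np.exp(scale * v)
    return e / e.sum()                                         # multinomial-logit shares

print(np.round(shares(1.5), 2))   # skewed, survey-style proportions
print(np.round(shares(0.3), 2))   # same ordering, but everything drifts towards 0.20
```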

Dr Strange knew there was only one combination of stimuli that would lead to Thanos being destroyed in such a way that the “snap” was reversed. Unfortunately that combination required Tony Stark to die too. Sometimes the “hugely unlikely” is what is observed. It all depends on what is being changed around us.

TL;DR: Next time you see a study saying the “proportion of people who will engage in a behaviour x is some outlandishly high or low y%” you should immediately be wary.

*There is an absolutely CRUCIAL point to be made here regarding DCEs and which will head off (I hope) the criticism that I know a bunch of readers with statistical knowledge are just itching to say. The “standard errors” and hence confidence intervals around “utility estimates” or “party loyalty estimates” or whatever you are modelling in a logit/probit model, are calculated by looking at between-subject variation. DCEs are, at their core, NOT about between-subject variation. Their default assumption is that Mrs Smith’s “degree of certainty” is NOT the same as Mr Patel’s. In a traditional regression with a continuous outcome, all these different “types of variation” tend to come out in the wash and you’ll get an answer that might be imprecise…..but it’s unbiased.

In a limited dependent variable model (logit/probit), this heteroscedasticity (differences in “sureness”/”certainty”/”level of understanding”/”engagement with the task” between people) causes BIAS (not just a nuisance but an unaddressable problem). This is why in traditional logit models (probit too, but logit is more common) like a clinical study with ONE endpoint per patient (cure/not……live/die…..) you can’t measure within-subject variation, because YOU ONLY HAVE ONE DATAPOINT! The “quantification of variability” is ENTIRELY BETWEEN-SUBJECT variation. Mathematical psychologists have shown for half a century that humans are inconsistent in many many areas. A well-designed DCE can measure this. I’ve done it, finding exactly what my predecessors found – lower levels of education, advanced age, etc, all cause you to be MORE INCONSISTENT (higher variance, manifesting as small beta estimates and choice frequencies closer to 1/n). Until you “net this out” you can’t aggregate Mrs Smith and Mr Patel’s data. THIS IS WHY PSEPHOLOGISTS GET ELECTION PREDICTIONS WRONG SO OFTEN.


STAY TUNED FOR PART 2: WHY INTERPRETATION OF DISCRETE DATA IS FREQUENTLY SO VERY VERY WRONG

Most-Least Voting (2)

Most-Least Voting – Questions raised – some of which were serious, some I suspect were “rabble-rousing”. I’ve edited to reduce snark and generally tried to give benefit of the doubt, even though I know some people really should just go out more……

Arrow’s Theorem only applies to generic voting. Fair results can be obtained if particulars are taken into account. When you only have a few candidates MLV is not what you’d go for. With a huge pool of eligible candidates, say 1000, all available for say 9 seats, then Cumulative vote tallying is ideal.

Reference please.

“Also something polsci experts often fail to consider is degree of polarization. You don’t have to have just “like” vs “dislike”, you can have a Likert scale on degree of like/dislike, and use it to weight the votes, so that a polarizing candidate who is less polarizing than the other still has a chance to be ahead of the milquetoast centrist. I know, I know, requires fairly sophisticated voters, but worth a shot some time in experimental research trials.”

Likert scaling assumes the distances between each choice (answer option) are equal. Please provide references from the mathematical psychology literature showing this to be true. (I’ll save you time – there are none. My co-author was editor of the top journal – JMP – for almost 40 years and never encountered a study showing this. He is AAJ Marley.) I could quote you amusing anecdotes, like the fact that older traditional Chinese people associate the number 4 with death and so avoid it. Statisticians then spend yonks trying to work out if dips at number 4 are “real” or “due to cultural stuff”. Please stop throwing up new terms like “Likert” when it is merely expressing a phenomenon I discredited in my postings before.

San Francisco city government, supervisors, sheriff and district attorney are chosen by ranked choice voting. That, combined with district elections for supervisors, has resulted in a parade of ineffectual, sometimes dangerous, political mediocrities, a chaotic disaster, controlled by the Democratic County Central Committee. If a voter fails to choose three candidates, their vote is thrown out.

You say ranked choice voting – I’m not defending that – so your point is?

Some supervisors have been elected with less than 25% of the vote.

Choose from Hillary, Trump and any run of the mill US politician in the centre. Why does LESS THAN 25% “MEAN THEY ARE ILLEGITIMATE”? “Top” candidates don’t matter under MLV if they also disgust a huge number of the rest of the population. This is NOT ranked voting (which YOU talk about). Please actually address my discussed voting system and don’t straw man.

It’s horses for courses to get around Arrow. In other words, you select the most appropriate voting system for the size of the candidate pool and the seats being vied for.

I said your latter statement at the start. Why are you presenting this as a “new insight”? Arrow always said you make your moral judgments, based on “values” and the “system”, THEN you can choose the system that best achieves these. As to “get around Arrow”. Nope.

While it is an interesting fad, there is no real guarantee that rigging elections to favor centrists will get you better government. As it happens, I am a Libertarian. Some of my ill-advised fellow party members argue vociferously for ranked choice voting or the like. I attempt to point out to them that RCV tends to guarantee that my party will never win elections, but the RCV faithful will not listen.

Where did I say that MLV rigs elections in favour of centrists? I merely quoted an observation from the Dutch/Belgian researchers that centrists probably stand a better chance of being elected. If you have data showing that MLV disproportionately benefits centrists at the expense of others please quote it – PARTICULARLY in a multidimensional format (which even the continental European authors do not provide). Note I also said that in a MULTI-DIMENSIONAL world, the concept of a “centrist” is less meaningful. MLV could get you your libertarianism (in getting govt out of the bedroom). Please stop putting words into my mouth.

There’s a lot of talk about candidates and parties, but not a lot of talk about policy.

One way to create significant momentum to deal with global climate change is to place high taxes onto fossil fuels. As Illinois recently demonstrated, this is highly unpopular.

In either Ranked Choice or Most-Least systems, how do necessary but unpopular policies get enacted?

I’m not going to claim miracles. Just as under ANY other voting scheme, there must be a critical mass of people who “see the peril” and vote accordingly. MLV at least allows these people to “veto” candidates who totally dismiss the environmental issues. So it isn’t “the solution” but it may be “ a quicker solution.” One big benefit of MLV is that it is probably the system that gives the greatest “veto power” to any majority of the population whose candidate(s) didn’t make it into government. So in the UK, the strong environmental lobby crossing all the “progressive parties” who keep losing elections could start exercising real power via their “least” votes.

Links for MLV

Nakedcapitalism.com very kindly reposted my last entry.

It was somewhat lacking in links.

I intend, when I’m feeling up to it, to put all the links in a posting. Stay tuned, they WILL appear.

Thanks.

Ranked choice and most-least voting

I recently realised that two systems proposed as “PR-lite” or “a step towards full PR” can produce radically different outcomes IN REALITY and not just as a THEORETICAL CURIOSITY. The two are “single candidate ranked choice” and “most-least voting – MLV”, most notably when there are just three candidates.

Here’s the deal. Under ranked choice you must rank all three candidates, 1, 2 & 3 (most preferred to least preferred). Under most-least voting you indicate only the “most preferred” (rank 1) and least preferred (with 3 candidates, rank 3). The OBSERVED set of data should be the same. (I’m not going to get into the issue of why they might not – that gets into complex mathematics and I’ll do it another time).

For those who don’t want to get bogged down in the following discussion of the maths, here’s why the two systems can, given EXACTLY the same observed count data, give a different “winning candidate”. Ranked voting essentially tries to identify the (first or second best) candidate that the people-supporting-the-losing-3rd-party-candidate are “most happy with”. Under MLV, if both “first” and “second” preference candidates are diametrically opposite (and mutually hated) then NEITHER should necessarily be elected. The candidate who came a (very very) distant third can be elected if (s)he is NOT HATED by anyone. Essentially, if you polarise the electorate you are penalised. A “centrist” who hasn’t either “enthused” or “repelled” anyone will win under MLV.

I tended to think this was a “theoretical curiosity”. However, upon looking more closely at the 2016 Iowa Democratic Presidential primary I realised this ACTUALLY HAPPENED. Bernie Sanders and Hillary Clinton were essentially tied on about 49.5% each in terms of their “primary first preference vote”. Hillary had the edge, and the 3rd candidate, O’Malley, dropped out (but too late, so he still got votes). Yet he would actually have been the key influencer if either ranked voting or MLV had been used. Under ranked choice, either Hillary or Bernie would have won (determined by who the majority of O’Malley’s supporters put as second preference). Under MLV, and assuming that the “much talked about antipathy between Bernie and Hillary was real”, each candidate’s supporters would have put the other as “least preferred”. The “most-minus-least” counts would have been slightly negative for one and likely both candidates. O’Malley, on the other hand, would have obtained a small positive net most-minus-least vote (getting 1 to 2% of the vote, with few/no people putting him as “least preferred”). MLV simply subtracts the “least preferred” total from the “most preferred” total for each candidate, giving a “net support rating”.

Under ranked choice voting either Hillary or Bernie would have won. Under MLV both would have been denied the win in favour of O’Malley, because he “pissed nobody off”.
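
Here is that tally as a back-of-the-envelope sketch. The splits are the stylised assumptions above, plus my guess (purely for illustration) that O’Malley’s supporters split their “least” votes evenly between the two front-runners and that nobody puts him last:

```python
# Minimal sketch of the most-minus-least tally for the stylised 2016 Iowa scenario.
# Vote splits are the post's assumptions plus an illustrative 50/50 split of
# O'Malley supporters' "least" votes -- not real ballot data.
counts_most  = {"Sanders": 49.75, "Clinton": 49.75, "O'Malley": 0.5}            # % "most preferred"
counts_least = {"Sanders": 49.75 + 0.25, "Clinton": 49.75 + 0.25, "O'Malley": 0.0}  # % "least preferred"

net = {c: counts_most[c] - counts_least[c] for c in counts_most}
print(net)                             # {'Sanders': -0.25, 'Clinton': -0.25, "O'Malley": 0.5}
print(max(net, key=net.get))           # O'Malley: the only candidate who "pissed nobody off"
# Sanity check from the post below: most-minus-least totals must sum to zero.
print(round(sum(net.values()), 10))    # 0.0
```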

Here’s the more detailed discussion.

Most-Least Voting (MLV) is a special case of a more general method of “stated preferences” called Best-Worst Scaling (BWS). Declaration of interest: I am a co-author on the definitive CUP textbook on BWS, was involved (along with its inventor) in much of the theoretical development and application in various fields (most notably health). HOWEVER I have had no involvement with the theory, parameterisation or application of MLV. Indeed, once I became aware of this method of voting, on checking the bibliography, it became clear that the authors were not actually aware of BWS and due to the “silo effect” in academia, had come up with it largely independently of what we had already done. Incidentally some of the Baltic States have used or do use MLV in certain instances so it isn’t just a “theoretical curiosity”.

OK, having got that out the way, what do I think of MLV? In short, I think it is worthy of serious consideration and wish we’d thought of it first! Like ranked choice voting with single member constituencies (something in use or proposed in various Anglo-Saxon countries like Australia, the UK and USA), it is not “proper” Proportional Representation (PR). However, it can be considered either as a nice compromise, or as a stepping stone to “full PR”. In terms of its similarities to ranked choice voting: suppose there are 5 candidates in your constituency. Under ranked choice, for the maths to not be horribly skewed and potentially very very gameable, you should be forced to rank all five, 1,2,3,4,5. The problem, known since the mid 1960s, is that people are good at “top” and “bottom” ranks but get very “random” and arbitrary “in the middle”. MLV exploits this. It only asks for top and bottom. Thus it may be considered to be the “minimum change to first-past-the-post – FPTP – possible” so as to “make things easy for people”. You only provide ONE extra piece of information – the candidate/Party you like least. If you do not provide both a MOST and a LEAST choice then your ballot is spoilt. This is IMPERATIVE for the maths to work, and for the system to be demonstrably “equitable”. (Most-minus-least vote totals must sum to zero.)

The common question is “Suppose there are only three candidates – aren’t ranked choice and MLV the same?” NO. See above for a real life example. Ranked choice MIGHT be unconstitutional in certain countries (if the mathematicians and lawyers got together because not everyone has the “same influence” mathematically).

So what is happening in practice?  The authors conclude that if the “FPTP winning” candidate espouses (say) a very extreme policy on (say) immigration or something, that all other parties abhor, then (s)he is likely to lose. All other parties “gang up” and place that candidate as “least”. Most-minus-least vote tally is net (highly?) negative. A more “moderate” candidate likely wins. Indeed, the authors claim that “centrists” likely prevail a lot of the time – though they might be an “O’Malley with 1% primary vote”. Though if a candidate would get a MAJORITY (and not just a PLURALITY) under FPTP, they’ll still win under MLV. So “majority” (non-coalition) governments still can happen – they’re just harder to achieve and “third parties” (etc) much more easily get a foothold. I happen to think that this “centrists rule” conclusion is a little simplistic when you move from a single dimension (left/right) to multidimensional space. Yes, maybe you get a candidate closest to the centroid across all dimensions but “how strongly people regard each dimension” can affect results. So, as they frustratingly say in academic papers, it’s “an empirical issue” as to what will happen. However, I will venture a conclusion that “extremists” will naturally get weeded out. Whilst some extremists might be generally considered bad (consider dictators who were first voted in via pluralities in 1930s Europe), others (painted as “extremists” by the MSM like a Sanders today or an Attlee or FDR of yesteryear) could be considered necessary and without them society would be much worse off. It gets necessarily subjective here…!

TL;DR: Arrow’s Impossibility Theorem still holds. MLV doesn’t solve all problems but it is attractive in addressing a lot of the most commonly made criticisms of voting systems used in the UK and USA. However, it isn’t the ONLY system that can address these criticisms – it is merely the “simplest” in terms of practicality and requiring “minimum extra effort by voters beyond what they do now”. Whether you “like it” depends on your “values”.