Category Archives: DCE support

DCE references

I’ve promised academic references for certain statements in the last few blog entries. Here they are, numbered according to the numbering used in those articles:

[1] Specification Error in Probit Models. Adonis Yatchew and Zvi Griliches. The Review of Economics and Statistics, Vol. 67, No. 1 (Feb. 1985), pp. 134-139. – THIS PAPER SHOWS WHY YOU MUST ADJUST FOR DIFFERENT VARIANCES BEFORE AGGREGATING HUMANS, ELSE YOU GET BIAS, NOT SIMPLY INCONSISTENCY. RELEVANT TO LOGIT OR PROBIT MODELS.

[2] Combining Sources of Preference Data. David Hensher, Jordan Louviere and Joffre Swait. Journal of Econometrics, Vol. 89, Issues 1-2 (26 November 1998), pp. 197-221. – THIS PAPER SHOWS THEORETICALLY AND EMPIRICALLY WHY YOU MUST NET OUT VARIANCE DIFFERENCES BETWEEN DATA SOURCES (INCLUDING SUBJECTS) BEFORE AGGREGATING THEM.

[3] Confound It! That Pesky Little Scale Constant Messes Up Our Convenient Assumptions. Jordan Louviere and Thomas Eagle. – A USER-ACCESSIBLE EXPLANATION OF THE VARIANCE ISSUE IF [1] AND [2] ARE UNAVAILABLE.

[4] Best-Worst Scaling: Theory, Methods and Applications. Jordan Louviere, Terry N. Flynn and Anthony A. J. Marley. Cambridge University Press (2015).

[5] The Role of the Scale Parameter in Estimation and Comparison of Multinomial Logit Models. Joffre Swait and Jordan Louviere. Journal of Marketing Research, 30(3), pp. 305-314.

 

Perils of discrete choices in vax and surveys – SPOILERS FOR AVENGERS ENDGAME

NB: Edits made between 16th and 18th June 2021 for clarity and to explain certain statistical concepts. Key references are now included as of the 18th; they link to the references post here.

Warning – for some potential LULZ I’m going to use the Marvel Cinematic Universe (MCU) as a way to explain variability in outcomes. If you haven’t watched up to and including Avengers Endgame and don’t want spoilers go watch them first!

I find myself having to explain why “this” or “that” study (frequently survey based, but sometimes a clinical study) has results that should be regarded as deeply suspect. I hope to provide some user-friendly insights here into why people like me can be very suspicious of such studies and what you as the average reader should look out for.

To aid exposition, I’ll do the old trick of “giving the punchline first, so those who don’t want the mathematics and logic can skip it”.

In many contexts you get a “one shot” for an individual – “live/die” or “cure/non-cure” or “Democrat/Republican/Libertarian/Green” in a general election etc. You don’t know the variability. Would that person – call her Mrs Smith – have displayed the same result in 14,000,604 other parallel universes which in key respects are the same as ours? MCU fans will instantly recognise this number. Spoiler alert – We’re getting into the territory of sci-fi and the Marvel Cinematic Universe here! When I say “variability” I am SPECIFICALLY referring to the variability ONE person – Mrs Smith – exhibits. Would she ALWAYS do “action p”? Or would she, in different universes, do “action q”?

Unfortunately we observe actions in ONE universe. We don’t in reality OBSERVE the “other” 14,000,604 universes – we are not Doctor Strange. So we don’t know. So scientists assume “what we observe” is, in some sense, the TRUE effect. For the statistically minded who want to say “GOTCHA – we have standard errors and confidence intervals etc to handle uncertainty”, I’ll reply: go down and read the asterisked section, because we clearly are NOT on the same page here.* (TL;DR – that “variability” says NOTHING about Mrs Smith’s variation across different universes but is based on variation ACROSS SUBJECTS who might be LIKE Mrs Smith. The two are potentially VERY VERY different.)

This can lead to spectacularly bad outcomes but scientists defend themselves by saying “we had no choice! You can’t have a do-over like in the MCU and see if the patient dissolves to dust under a different stimulus!” Calculating the “variation intrinsic to patient Smith” is impossible. I’m here to discuss how, if we’re really interested and willing to put resources into it, we CAN find out the variation in many circumstances (though not a Thanos or other “death” event) – DID A “RESPONSIVE” PERSON RESPOND BECAUSE THEY ALWAYS WILL OR DID WE SEE THE ONE OUT OF 14,000,605 OCCASIONS IN WHICH THEY DID? DID THE SET OF STIMULI REQUIRED TO REVERSE THE SNAP (KILLING HALF OF ALL LIVING THINGS) – ONE OUT OF 14,000,605 – REQUIRE IRON MAN TO DIE? (UNFORTUNATELY, YES). AVENGERS ENDGAME IS NOT JUST POPCORN SCI-FI BUT A SUBTLE AND CLEVER EXPLANATION OF DISCRETE CHOICE MODELLING.

To start this discussion we’ve got to go back to stats 101. Heteroscedasticity. You’ll probably have been shown it in a graph. As “x” values increase, so do “y” values. However, the RANGE of values for y also increases – you see a “funnel” with the tip pointed towards the lower left, widening as you move toward the upper right. If you do a least squares regression (to explain y values from a given x value) you’ll get the right “average” answer. However, your measure of “certainty” or “variability” – the standard error and thus confidence interval – will be wrong. This is what I’ll call a “nuisance”. Your “main answer” (the vector of betas, showing the average effect of an explanatory variable on the outcome) will be correct on average, but your level of confidence will not and needs to be adjusted. There are methods to make this adjustment, so all can be well. Continuous outcomes (like GDP, blood pressure, etc) are easily analysed using such methods. “Discrete” (one out of two or more) outcomes (yes/no……die/live……Democrat/Republican/Libertarian/Green) are not. [1,2,3,5] But here’s the important takeaway I’m going to explain – heteroscedasticity in models with continuous outcomes (GDP/blood pressure etc) is just a “nuisance”. Heteroscedasticity in limited dependent variable models (yes/no……Democrat/Republican…..etc) is a FUNDAMENTAL PROBLEM CAUSING BIAS, not just “standard errors that must be tweaked” – you usually DON’T KNOW THE DIRECTION OF THE BIAS, LET ALONE ITS MAGNITUDE. YOU ARE ROYALLY SCREWED.
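For the code-minded, here is a minimal simulation sketch (Python, with made-up numbers – not taken from any of the cited papers) of the point in [1] and [2]: two groups share the SAME utility weight but differ in error variance; pool them in one logit and the estimate matches neither group.

```python
# Minimal sketch (illustrative numbers only): pooling groups with equal
# preferences but unequal error variances biases the pooled logit coefficient.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, beta = 5000, 1.5                 # true utility weight, identical in both groups

def simulate(scale):
    """Binary choices from a logit; only beta/scale is identified, so a bigger
    'scale' (noisier group) shows up as a smaller apparent coefficient."""
    x = rng.normal(size=n)
    p = 1 / (1 + np.exp(-(beta / scale) * x))
    return x, rng.binomial(1, p)

x_lo, y_lo = simulate(scale=1.0)    # consistent respondents: identified beta = 1.5
x_hi, y_hi = simulate(scale=3.0)    # noisy respondents, SAME preferences: beta = 0.5

x_pool = np.concatenate([x_lo, x_hi])
y_pool = np.concatenate([y_lo, y_hi])
fit = sm.Logit(y_pool, sm.add_constant(x_pool)).fit(disp=0)
print("pooled estimate:", round(fit.params[1], 2))   # lands between 1.5 and 0.5,
                                                     # matching neither group
```

Robust standard errors don’t rescue you here – the point estimate itself is contaminated, and how badly depends on the (unknown) mix of variances in your sample.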

 

Discrete Outcomes and Their Problems

Discrete outcomes are generally coded (0,1) or (0,1,2) or (0,1,2,3) etc. These numbers usually have no intrinsic numerical meaning. They could be (a,b) or (a,b,c) or (a,b,c,d). However, the point is, to adequately understand “what is going on underneath” you usually need something that links a discrete outcome to some (albeit hypothesised) latent (unobserved) continuous numerical scale. Let’s consider 1950s America when party allegiance generally was on a single left/right dimension with “Democrat” arbitrarily chosen as the “positive” end of the scale and “Republican” as the negative. Reverse these if you want – it makes NO difference to my argument.

Mrs Smith has an underlying position on the latent “party allegiance scale”. If it is positive, she is more likely to vote Democrat and vice versa. Note she is NOT guaranteed to vote one way or the other. I’m just saying, if she is “strongly positive” then the chances of her switching to the Republican Party are small, but not zero. As her position on the “party allegiance scale” zooms up to positive infinity, the chances of her voting Democrat asymptotically approach one (but NEVER get to one).

There needs to be a “link function” to relate a (hypothesised) position on the latent party allegiance scale to a discrete choice (let’s keep it simple with just Democrat or Republican for now). Different academic disciplines prefer different functions – some prefer those based on the normal (Gaussian) distribution whilst others favour those based on the logistic distribution. In practice it makes almost no difference to the answer. Those of us who like the logistic function do so because it remains “closed form” even when there are 3+ outcomes – in other words there is a mathematical formula that can be “maximised” using an established technique. The multinomial probit isn’t – you have to “brute force attack” it. Doing brute force means you must make sure your “peak” is not the tallest mountain in the Appalachians but is, in fact, Mount Everest. The logistic “maximisation routine” can make the same mistake, but it’s generally easier to spot mistakes and get to Everest first time.
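For anyone who wants to see the two links side by side, here’s a tiny Python sketch (illustrative only): both map a latent “party allegiance” position to a probability of voting Democrat, and once the probit is rescaled the two are practically indistinguishable.

```python
# Sketch of the two common link functions (logistic vs normal/probit).
import numpy as np
from scipy.stats import norm

def p_democrat_logit(latent):
    return 1 / (1 + np.exp(-latent))     # logistic link

def p_democrat_probit(latent):
    return norm.cdf(latent)              # normal (probit) link

for v in (-3.0, -1.0, 0.0, 1.0, 3.0):
    # dividing by ~1.6 roughly aligns the two distributions' spreads
    print(v, round(p_democrat_logit(v), 3), round(p_democrat_probit(v / 1.6), 3))
```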

The link function relates a position on the latent party allegiance scale to a discrete (Democrat/Republican) outcome. What estimation in fact does is the reverse – it OBSERVES Mrs Smith’s voting and (via the link function) INFERS her position on the latent party allegiance scale.

How do you infer a position on a continuous scale from a (0,1) response (Democrat or Republican) in traditional logit/probit studies?

You typically look at the “other factors” defining Mrs Smith, her circumstances etc and, most importantly, draw strength from OTHER people in your sample. See the problem? You are already making inferences on the basis of people who are NOT Mrs Smith. This leads directly to the issue of “variability”, so I’ll put the dreaded asterisk in for you stats bods.* We are inferring information from people “like Mrs Smith”. Just because they’re female, of similar age, similar sociodemographics etc, does NOT mean they are useful in placing Mrs Smith accurately on the latent scale. You really need to see what SHE would do in various scenarios, NOT what people “seemingly like her” do.

So, this process of making inferences is incredibly dangerous unless you have seen Mrs Smith’s choice/outcome behaviour under a bunch of different scenarios. If you have merely observed her in the ONLY ONE out of 14,000,605 universes where she voted Democrat then you’ll vastly over-estimate her loyalty to the Democrats. If she in fact votes Democrat in all 14+ million universes then you’re on more solid ground. The point is, YOU DON’T KNOW (unless you’re Doctor Strange).

As it happens, Dr Strange looked at 14,000,605 universes. He saw that there was one, and ONLY one, “intervention” that could lead to victory (Thanos being defeated and Earth being restored). Unfortunately the “intervention” required Tony Stark to die. He held up one quivering finger at the crucial moment in the movie to show that the “Endgame” required Tony to act. Tony was clever enough to know what this meant. Which is partly why the movie is so family-blogging great. The Russo brothers, directing, knew what they were doing. Tony “did action x” and the one outcome that we “needed” out of a possible 14,000,605 outcomes came to pass. And he died. But Earth and the universe were restored.

Discrete Choice Modelling (Discrete Choice Experiments – DCEs) As a Solution

How do we escape the above conundrum? In effect, we try to simulate some “key universes” from the 14+ million that allow us to better understand “how committed Mrs Smith is to the Democrats”. We do this via a Discrete Choice Experiment or DCE.

How, broadly, do DCEs work? In essence you vary all the “key stimuli” that influence a decision in some pre-defined, statistical way so as to “simulate” the “key universes”. Then you see what the person does in each one. You can then quantify the effect each stimulus has on that person’s utility function. Note carefully: I say “that person”. The optimal DCE can be done ON A SINGLE PERSON. You map their “demand surface” (the multi-dimensional version of the classic “demand curve”) across multiple dimensions, so you can “plug in” any combination of levels of stimuli and predict whether they’d choose a or b, Republican or Democrat…..or whatever your outcome is.
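To make that concrete, here’s a hedged sketch (Python; the attributes, design and choices are all invented, and an off-the-shelf, lightly regularised scikit-learn logit stands in for the usual maximum-likelihood routine) of fitting a choice model to ONE person and then “plugging in” a new combination of attribute levels:

```python
# Individual-level sketch: one respondent, 16 designed tasks, 3 binary attributes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(16, 3)).astype(float)    # 16 tasks, 3 binary attributes
true_beta = np.array([2.0, -1.0, 0.5])                 # Mrs Smith's (unknown) weights
p = 1 / (1 + np.exp(-(X @ true_beta - 0.75)))
y = rng.binomial(1, p)                                 # her observed 0/1 choices

model = LogisticRegression().fit(X, y)                 # fit to HER data alone

new_profile = np.array([[1.0, 0.0, 1.0]])              # any point on her "demand surface"
print("estimated weights:", model.coef_[0])
print("P(she chooses this profile):", model.predict_proba(new_profile)[0, 1])
```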

For the stats nerds: you must have non-negative degrees of freedom. In other words, for 8 estimates (“the effect of this level of this stimulus” is a degree of freedom) you must have 8+ choices made by Mrs Smith. I’ve done this before. Incidentally, because I know the rules well I know when I can break them – I use least squares regression on discrete outcomes (0/1). You are taught NEVER to do this. Indeed, unless you know what you are doing that is good advice. But under certain circumstances it tells you a lot. Not in terms of anything you’d quote in published results, but as a “quick and dirty” way of identifying individuals who have extreme preferences and “getting into the tails of the logistic distribution”. I write macros that “find me anyone whose choices are entirely dictated by whether the hypothesised election involves elimination of income tax”….or whatever.
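And here’s roughly what that “quick and dirty” least-squares screen looks like – a sketch only, with an invented design and a hypothetical respondent who votes purely on the income-tax attribute:

```python
# Linear probability model (OLS on 0/1 choices) used ONLY as a screening device.
import numpy as np

rng = np.random.default_rng(1)
n_tasks = 16                                   # 16 choices > 5 parameters: df non-negative
X = rng.integers(0, 2, size=(n_tasks, 4))      # 4 binary attribute differences (invented)
# Hypothetical respondent driven entirely by attribute 0 ("income tax")
y = (X[:, 0] == 1).astype(float)

coefs, *_ = np.linalg.lstsq(np.column_stack([np.ones(n_tasks), X]), y, rcond=None)
dominance = np.abs(coefs[1:]) / np.abs(coefs[1:]).sum()
print("share of weight on each attribute:", np.round(dominance, 2))
# A share near 1.0 on one attribute flags a "lexicographic" responder worth
# separating out before any pooled logit estimation.
```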

This “cheat” is a good way to test for an absolutely key problem with DCEs – that two people, one old, one young, might have IDENTICAL views, but the younger person seems to “respond more readily” to stimuli. I’ll group a bunch of 30 year olds and see that their “beta estimates” are all quite large – they are strongly affected (IT SEEMS) by the stimuli. I group a bunch of 70 year olds and see that their “beta estimates” are much smaller. They SEEM to be less affected by the stimuli. But if the “pattern of betas” is the same, I know both age groups in fact have the same preferences. What is merely happening is that the “error rate” is higher among the 70 year olds – something commonly seen. The average level of age-related cognitive impairment is higher and they “mistakenly” choose the “less preferred” manifesto more often, “diluting” their estimates. However, when the chips are down in a real world situation (like the voting booth) they might be no different from their younger brethren. They concentrate hard and take voting seriously. Suddenly that 60/40 split seen in the SP data among the “oldies” becomes 80/20 (just like the young’uns) in the “real” – Revealed Preference (RP) – data.
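A sketch of the check I’m describing (with made-up beta estimates): if one group’s betas are roughly a constant multiple of the other’s, the two groups share preferences and differ only in error variance (scale).

```python
# Same preference pattern, different scale: the tell-tale constant ratio.
import numpy as np

betas_young = np.array([1.2, -0.8, 0.5, 2.0])
betas_old   = np.array([0.6, -0.4, 0.25, 1.0])   # same pattern, half the size (invented)

ratios = betas_old / betas_young
print("ratios:", ratios)                          # all ~0.5 -> same preferences, more noise
print("identical up to scale:", np.allclose(ratios, ratios[0]))
```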

Important point: in a Stated Preference (SP) study you deliberately “assume away” complicating factors. Frequencies then become very skewed. In a Revealed Preference (RP) study – REALITY – there is all sorts of cr$p that impinges upon your choices, so frequencies tend to move towards 1/n, where n is the number of response categories. So people often look “less sure” in real life, with less skewed choice frequencies, but this just reflects the fact that life is complicated and people don’t “assume everything else is equal”. Remember this.

How do you Design a DCE in Practice?

Firstly, do qualitative work to find out what stimuli (attributes) matter to respondents (those which cause their choices to change, depending on what “level” the attribute takes).

Then, to design the “scenarios” you typically proceed in one of two ways. The first way is broadly Bayesian. You use prior knowledge as to what levels of the attributes “typically influence subjects like Mrs Smith”. You then construct a design – in the 1950s political example, a series of pairs of manifestoes, one Democrat, one Republican – that varies these attributes in ways that “quickly and efficiently” establish which issues, or groups of issues, cause Mrs Smith to change her vote. Think of a bunch of dots in the (x,y) plane of a graph grouped around the best-fit line. You don’t bother to ask about manifestoes that are “far away” from Mrs Smith’s likely solution. This method gives you incredibly accurate estimates. BUT if your priors about Mrs Smith are wrong, you’ll get an answer that is “incredibly accurate but wrong”. It’s all very well to measure the highest peak in the Appalachians to the nearest millimetre, but that’s no bloody use since Everest is where you should be measuring!

The second way utilises “orthogonal designs”. These essentially vary the stimuli (attribute levels) at “right angles” to one another so as to “cover all the utility space”. The downside is that some “pseudo-elections” involve manifestoes with policies that are pretty unrealistic. The upside is that you’ve covered all possibilities. You go looking for peaks in Bangladesh. Top tip – that’s dumb. But ultimately, because you are ALSO looking at the Nepalese plateau, you’ll find Everest. Is this better or worse? It depends. If you use Bayesian methods sensibly you can do at least as well as orthogonal designs. But they are a delicate power tool, not to be used by the non-expert.
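For illustration only, here’s a toy Python sketch of the orthogonal idea for the paired-manifesto example – a full factorial over three invented binary attributes, with the “fold-over” as the comparator so every attribute difference is balanced. Real designs come from catalogues or specialist software; this just shows the flavour.

```python
# Toy "orthogonal-ish" paired design: full factorial + fold-over comparator.
from itertools import product

attributes = ["tax_cut", "public_healthcare", "immigration_cap"]       # illustrative
manifesto_a = list(product([0, 1], repeat=len(attributes)))            # 2^3 profiles
manifesto_b = [tuple(1 - lvl for lvl in prof) for prof in manifesto_a]  # fold-over

for i, (a, b) in enumerate(zip(manifesto_a, manifesto_b), 1):
    print(f"choice task {i}: A={a}  vs  B={b}")
```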

Consider the original iPhone. Whilst phones with some form of interactive screen had been around, the iPhone was genuinely revolutionary. Using “priors” based on the Blackberry or on Windows phones operated with a stylus would have been seriously dangerous. The iPhone “thought outside the box”. That’s when an orthogonal design is best. Sometimes Mrs Smith doesn’t KNOW what she’d like until a hypothetical product is mocked up and shown on the screen. Bayesian methods would “stick to variations on what is available”. Orthogonal designs “imagine stuff” – OK, at the expense of asking dumb questions about mountains in Bangladesh…..but whilst I value both approaches, I prefer orthogonal designs. ’Cause I like to investigate “new stuff”. Sometimes the “peak” is something new you never knew was there.

Reliability of DCE (Stated Preference) Answers

Do people answer DCE scenarios honestly? Some do, some don’t. But the important thing is that experienced researchers like me know how to spot liars, those who have “done their best but have a big dose of uncertainty” and those who “are pretty sure of their answers” (even if their honestly given answers happen to be wrong).

How do liars give themselves away? Well, a DCE is typically operating in 5+ dimensions. The number of attributes (stimuli) defines the number of dimensions. If you lie – either just to be annoying or to get through the study quickly and collect reward points in some online panel – then you must lie in a way that “works across all the dimensions”. Humans generally can’t do this. I can often spot liars a mile off just eyeballing data. If you’re going to fool me you must remember your answers in 5+ dimensions and use some sort of crib sheet to ensure your lies are consistent and your “preferred options” all conform to your “story”.

If you suddenly prefer a cellphone with no memory and elsewhere you have made clear you like a lot of storage to look at porn family pictures then it will raise a red flag in my data checks. Because you’ve HAD to sacrifice something else. And when you make totally mutually contradictory choices I get suspicious. (Quite apart from the fact I’m male and we are no angels and I’ve yet to see a male who likes an electronic device that can’t “store a lot of stuff”).

So if you want to mess with me you’ve got to do it in 5+ dimensions. And I practically guarantee you can’t. Which means I give your ID to the panel owner. They typically won’t take action straightaway. However, when University of X samples you for THEIR study, they see the “flag” stating “this person might take the p!ss”. If they find you’ve clicked through quickly, paying no attention to the questions just to collect points and money, or are otherwise p!ssing about, you know what happens? The panel company deletes your points ($). Bad luck – you just got booted out for being a d!ckhead.

TOP TIP – IF YOU’RE GOING TO LIE, MAKE SURE YOU CAN DO IT IN 9 DIMENSIONS BECAUSE OTHERWISE WE’LL SPOT YOU. YOU’LL BE BLACKLISTED PRONTO.

So it’s really much more hassle to lie in 9 dimensions than to just answer honestly. So don’t do it. If I’m unsure about you I won’t suggest you be blacklisted, but I WILL instruct my analysis program to place you in the group with “higher variance” (i.e. you are “less consistent” and might have a tendency to give mutually contradictory answers). You’ll have a lower contribution to the final solution – I am unsure whether you’re just someone with an odd demand function I’ve never before encountered, someone cognitively challenged by the task, or someone taking the p*ss.

What about “other” stated preference methods? Willingness-to-pay (WTP) is the one with a much longer pedigree, but it is also much more contentious. People like Yves Smith of NakedCapitalism have expressed unease with WTP estimates. I happen to agree with her – despite the fact I’ve co-authored with one of the world’s top WTP experts, Richard Carson of UCSD, whose figures were used in the Exxon Valdez settlement. I just find it hard to believe most humans can “think up a valid number to value something”. Sorry Richard. However, I HAVE used WTP on occasion when I’ve had no other choice. It’s just one of those tools I think must be used very, very carefully.

Where Next?

DCEs will, going back to our 1950s politics example, present Mrs Smith with a series of hypothetical, but REALISTIC, election scenarios. If you use orthogonal designs then unfortunately SOME scenarios might seem odd…..but with knowledge and experience you can minimise the number of silly scenarios. So, present hypothetical manifestos (one per party). The key policies of each manifesto are the “stimuli” which are varied. Ask her to state her preferred party each time. If, out of 16 pairs, she says “Democrat” every time then this is both good and bad. Good in that we can “segment her off” as a diehard Democrat. Bad in that she violates a KEY assumption of regression models – that she has an “error” term. She is “deterministic” not “probabilistic”. Thus keeping her in ANY regression model leads to bias (at least, any regression model based on limited dependent variable outcomes – our logit and probit models).

In terms of why you should be “netting her out if she says Democrat every time”, please read the asterisked bit. A DCE, in the ideal situation, should be doable for a SINGLE PERSON. Try running a logit/probit regression in which the dependent variable doesn’t vary. The program (Stata or whatever) will crash or give an error – no variation in the dependent variable. Mrs Smith should not be in the regression! The “link function” is not designed to cope with 100% choices.
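If you want to see that failure for yourself, here’s a hedged little demonstration (Python/statsmodels, invented data): feed a logit 16 choices that are all “Democrat” and it either throws a perfect-separation-style error or drives the estimates off towards infinity.

```python
# A respondent with NO variation in the dependent variable breaks a standard logit.
import numpy as np
import statsmodels.api as sm

X = sm.add_constant(np.random.default_rng(2).normal(size=(16, 2)))
y = np.ones(16)                      # Mrs Smith: "Democrat" all 16 times

try:
    res = sm.Logit(y, X).fit(disp=0)
    print(res.params)                # if it runs at all, the coefficients explode
except Exception as e:               # typically a perfect-separation error
    print("logit failed:", type(e).__name__)
```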

Why “Non-Experts” Make Horrid Mistakes

You might think adding Mr Patel, who does switch party depending on policy, solves things. I have lost count of the number of studies I have refereed whose authors thought this was “a solution”.

The thinking is: “OK, 100% + 50%: average = 75%”, and that can go into the link function to give an average utility, right? WRONG. The link function can’t deal with 100%, but 75% is fine. However, 100% from Mrs Smith is actually “infinite utility”. Do you know what you’ve actually done? You’ve asked “what’s the average of infinity and (say) ten?” Do you realise what a dumb question that is? If you think taking averages involving infinity is OK you should transfer out of all mathematical subjects pronto. Go do drama or liberal arts.
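Here’s the arithmetic in a few lines of Python, in case anyone doubts it: the logit link sends a 100% choice probability to infinite latent utility, so “averaging” Mrs Smith with Mr Patel really is averaging infinity with a finite number.

```python
# The logit link: p = 1 maps to infinite utility, so averaging is meaningless.
import math

def logit(p):
    if p in (0.0, 1.0):
        return math.inf if p == 1.0 else -math.inf   # infinite latent utility
    return math.log(p / (1 - p))

print(logit(0.75))                       # ~1.10: a finite utility for Mr Patel
print(logit(1.0))                        # inf: Mrs Smith's "utility"
print((logit(1.0) + logit(0.5)) / 2)     # inf: the "average" tells you nothing
```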

You KNOW Mrs Smith votes Democrat no matter what. Use some family blogging common sense – this is what frustrates me so much about people using these models “cause they look cool”. Separate her out.

Results?

Results from a limited dependent variable model (logit/probit) are notoriously hard to interpret. Those who have seen my objections on NakedCapitalism will know what is coming. Here is a hopefully abbreviated version of the explanation.

Here’s the math. We can solve the likelihood function to find what mean and variance are “most likely” to have produced the pattern of data we observed. Great. However, in limited dependent variable models this hits a wall. Why? The mean and the variance (technically, a function of the variance) are multiplied by each other. So, let’s say you get the “solution” of “8”. If that is mean × variance, is it 8×1? Or 4×2? Or 2×4? Or 1×8? Or any of an infinite number of combinations?

The underlying problem is that we have ONE equation with TWO unknowns. You need a SECOND equation in order to split the mean and variance.
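A tiny numerical illustration of the “one equation, two unknowns” point (invented numbers): scale the utility weight and the error scale by the same factor and every choice probability – and hence the likelihood – is unchanged, so the data cannot tell the two apart.

```python
# beta and sigma enter only as a ratio: (8, 1) and (4, 0.5) are indistinguishable.
import numpy as np

x = np.linspace(-2, 2, 5)

def p_choice(beta, sigma):
    return 1 / (1 + np.exp(-(beta / sigma) * x))

print(p_choice(beta=8, sigma=1))
print(p_choice(beta=4, sigma=0.5))   # identical probabilities, identical likelihood
```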

A Second Equation?

A second set of data can come from many sources. McFadden (who won the “Economics Nobel”) used real data from people’s decisions regarding public transport to do this – it “calibrated” his model (in other words, gave the right starting numbers upon which all the rest rested!). Using real data – “Revealed Preferences” (RP) – as opposed to the “Stated Preferences” discussed here is very useful.[2,5] RP data is good, but it is useless when you want to design something fundamentally new or “think outside the box”.

Where Next?

Attitudes have proven to be a salvation.[4] Segment people according to attitudes and the “uncalibrated voting data” you have can suddenly become highly predictive. It did for me at the 2017 General Election! 🙂

However, whenever I see a story with “rates” quoted for groups – be it “views on vaccination” or anything else – I roll my eyes. Why? Because whilst “patterns” (definitely will vaccinate / probably will vaccinate / not sure / probably won’t / definitely won’t) are somewhat stable, the initial calibration is often wrong. Thus, the media just yesterday in the UK breathlessly pronounced that the proportion of “vaccine hesitant” people who did, in fact, get vaccinated was really high. Twenty years of working in the field told me something: the skewed proportions expressed in stated preference studies are almost NEVER reproduced in real life. I knew full well that the proportion of people refusing the vaccine would NOT BE SKY HIGH AS THE MEDIA HAD SUGGESTED – in fact, when “revealed preferences” became available, we’d see that whilst the “pattern” across the (say, 5) response categories might be maintained, the raw rates would stop being so skewed and would be “squashed” towards 20% each. In effect, this “sky high” number of anti-vaxxers would be squashed towards one fifth (all the response category proportions would get closer to 1/n, where n is the number of response categories). It might even be the case that the pattern changed and the “no, never” bunch went below 20%. THAT is why you must “calibrate” stated preference data.
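Here’s a hedged sketch of that “squashing towards 1/n” effect (Python, invented utilities): inflate the error variance – divide the utilities by a bigger scale – and multinomial-logit choice shares slide towards equal shares without the ordering changing.

```python
# More noise (bigger scale) pushes choice shares towards 1/n; the ranking survives.
import numpy as np

def shares(utilities, scale):
    z = np.exp(np.asarray(utilities) / scale)
    return z / z.sum()

u = [2.0, 1.0, 0.0, -1.0, -2.0]            # 5 response categories (invented utilities)
print(np.round(shares(u, scale=1.0), 2))   # skewed, stated-preference style
print(np.round(shares(u, scale=4.0), 2))   # squashed towards 0.2 each, RP style
```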

Dr Strange knew there was only one combination of stimuli that would lead to Thanos being destroyed in such a way that the “snap” was reversed. Unfortunately that combination required Tony Stark to die too. Sometimes the “hugely unlikely” is what is observed. It all depends on what is being changed around us.

TL;DR: Next time you see a study saying the “proportion of people who will engage in behaviour x” is some outlandishly high or low y%, you should immediately be wary.

*There is an absolutely CRUCIAL point to be made here regarding DCEs and which will head off (I hope) the criticism that I know a bunch of readers with statistical knowledge are just itching to say. The “standard errors” and hence confidence intervals around “utility estimates” or “party loyalty estimates” or whatever you are modelling in a logit/probit model, are calculated by looking at between-subject variation. DCEs are, at their core, NOT about between-subject variation. Their default assumption is that Mrs Smith’s “degree of certainty” is NOT the same as Mr Patel’s. In a traditional regression with a continuous outcome, all these different “types of variation” tend to come out in the wash and you’ll get an answer that might be imprecise…..but it’s unbiased.

In a limited dependent variable model (logit/probit), this heteroscedasticity (differences in “sureness”/”certainty”/”level of understanding”/”engagement with the task” between people) causes BIAS (not just a nuisance but an unaddressable problem). This is why in traditional logit models (probit too, but logit is more common), like a clinical study with ONE endpoint per patient (cure/not……live/die…..), you can’t measure within-subject variation – YOU ONLY HAVE ONE DATAPOINT! The “quantification of variability” is ENTIRELY BETWEEN-SUBJECT variation. Mathematical psychologists have shown for half a century that humans are inconsistent in many, many areas. A well-designed DCE can measure this. I’ve done it, finding exactly what my predecessors found – lower levels of education, advanced age, etc, all cause you to be MORE INCONSISTENT (higher variance, manifesting as smaller beta estimates and choice frequencies closer to 1/n). Until you “net this out” you can’t aggregate Mrs Smith’s and Mr Patel’s data. THIS IS WHY PSEPHOLOGISTS GET ELECTION PREDICTIONS WRONG SO OFTEN.

 

STAY TUNED FOR PART 2: WHY INTERPRETATION OF DISCRETE DATA IS FREQUENTLY SO VERY VERY WRONG

Most-Least Voting (2)

Most-Least Voting – questions were raised, some of which were serious, some of which I suspect were “rabble-rousing”. I’ve edited to reduce snark and have generally tried to give the benefit of the doubt, even though I know some people really should just go out more……

Arrow’s Theorem only applies to generic voting. Fair results can be obtained if particulars are taken into account. When you only have a few candidates MLV is not what you’d go for. With a huge pool of eligible candidates, say 1000, all available for say 9 seats, then Cumulative vote tallying is ideal.

Reference please.

“Also something polsci experts often fail to consider is degree of polarization. You don’t have to have just “like” vs “dislike”, you can have a Likert scale on degree of like/dislike, and use it to weight the votes, so that a polarizing candidate who is less polarizing than the other still has a chance to be ahead of the milquetoast centrist. I know, I know, requires fairly sophisticated voters, but worth a shot some time in experimental research trials.”

Likert scaling assumes the distances between each choice (answer option) are equal. Please provide references from the mathematical psychology literature showing this to be true. (I’ll save you time – there are none. My co-author, A. A. J. Marley, was editor of the top journal – JMP – for almost 40 years and never encountered a study showing this.) I could quote you amusing anecdotes, like the fact that older traditional Chinese people associate the character for the number 4 with death and so avoid it. Statisticians then spend yonks trying to work out whether dips at number 4 are “real” or “due to cultural stuff”. Please stop throwing up new terms like “Likert” when it is merely expressing a phenomenon I discredited in my previous postings.

San Francisco city government, supervisors, sheriff and district attorney are chosen by ranked choice voting. That, combined with district elections for supervisors, has resulted in a parade of ineffectual, sometimes dangerous, political mediocrities, a chaotic disaster, controlled by the Democratic County Central Committee. If a voter fails to choose three candidates, their vote is thrown out.

You say ranked choice voting – I’m not defending that – so your point is?

Some supervisors have been elected with less than 25% of the vote.

Choose from Hillary, Trump and any run-of-the-mill US politician in the centre. Why does LESS THAN 25% “MEAN THEY ARE ILLEGITIMATE”? “Top” candidates don’t matter under MLV if they also disgust a huge number of the rest of the population. This is NOT ranked voting (which YOU talk about). Please actually address the voting system I discussed and don’t straw-man.

It’s horses for course to get around Arrow. In other words, you select the most appropriate voting system for the size of the candidate pool and the seats being vied for.

I said your latter statement at the start. Why are you presenting this as a “new insight”? Arrow always said you make your moral judgments, based on “values” and the “system”, THEN you can choose the system that best achieves these. As to “get around Arrow”. Nope.

While it is an interesting fad, there is no real guarantee that rigging elections to favor centrists will get you better government. As it happens, I am a Libertarian. Some of my ill-advised fellow party members argue vociferously for ranked choice voting or the like. I attempt to point out to them that RCV tends to guarantee that my party will never win elections, but the RCV faithful will not listen.

Where did I say that MLV rigs elections in favour of centrists? I merely quoted an observation from the Dutch/Belgian researchers that centrists probably stand a better chance of being elected. If you have data showing that MLV disproportionately benefits centrists at the expense of others please quote it – PARTICULARLY in a multidimensional format (which even the continental European authors do not provide). Note I also said that in a MULTI-DIMENSIONAL world, the concept of a “centrist” is less meaningful. MLV could get you your libertarianism (in getting government out of the bedroom). Please stop putting words into my mouth.

There’s a lot of talk about candidates and parties, but not a lot of talk about policy.

One way to create significant momentum to deal with global climate change is to place high taxes onto fossil fuels. As Illinois recently demonstrated, this is highly unpopular.

In either Ranked Choice or Most-Least systems, how do necessary but unpopular policies get enacted?

I’m not going to claim miracles. Just as under ANY other voting scheme, there must be a critical mass of people who “see the peril” and vote accordingly. MLV at least allows these people to “veto” candidates who totally dismiss environmental issues. So it isn’t “the solution” but it may be “a quicker solution”. One big benefit of MLV is that it is probably the system that gives the greatest “veto power” to any majority of the population whose candidate(s) didn’t make it into government. So in the UK, the strong environmental lobby crossing all the “progressive parties” that keep losing elections could start exercising real power via their “least” votes.

Links for MLV

Nakedcapitalism.com very kindly reposted my last entry.

It was somewhat lacking in links.

I intend, when I’m feeling up to it, to put all the links in a posting. Stay tuned, they WILL appear.

Thanks.

Ranked choice and most-least voting

I recently realised that two systems proposed as “PR-lite” or “a step towards full PR” can produce radically different outcomes IN REALITY and not just as a THEORETICAL CURIOSITY. The two are “single candidate ranked choice” and “most-least voting – MLV”, most notably when there are just three candidates.

Here’s the deal. Under ranked choice you must rank all three candidates, 1, 2 & 3 (most preferred to least preferred). Under most-least voting you indicate only the “most preferred” (rank 1) and least preferred (with 3 candidates, rank 3). The OBSERVED set of data should be the same. (I’m not going to get into the issue of why they might not – that gets into complex mathematics and I’ll do it another time).

For those who don’t want to get bogged down in the following discussion of the maths, here’s why the two systems can, given EXACTLY the same observed count data, give a different “winning candidate”. Ranked voting essentially tries to identify the (first or second best) candidate that the people-supporting-the-losing-3rd-party-candidate are “most happy with”. Under MLV, if both “first” and “second” preference candidates are diametrically opposite (and mutually hated) then NEITHER should necessarily be elected. The candidate who came a (very very) distant third can be elected if (s)he is NOT HATED by anyone. Essentially, if you polarise the electorate you are penalised. A “centrist” who hasn’t either “enthused” or “repelled” anyone will win under MLV.

I tended to think this was a “theoretical curiosity”. However, upon looking more closely at the 2016 Iowa Democratic Presidential primary I realised this ACTUALLY HAPPENED. Bernie Sanders and Hillary Clinton were essentially tied on about 49.5% each in terms of their “primary first preference vote”. Hillary had the edge, and the third candidate, O’Malley, dropped out (but too late, so he still got votes). Yet he would actually have been the key influencer if either ranked choice voting or MLV had been used. Under ranked choice, either Hillary or Bernie would have won (determined by whom the majority of O’Malley’s supporters put as their second preference). Under MLV, and assuming that the much-talked-about antipathy between Bernie and Hillary supporters was real, each candidate’s supporters would have put the other as “least preferred”. The “most-minus-least” counts would have been slightly negative for one and likely both candidates. O’Malley, on the other hand, would have obtained a small positive net most-minus-least vote (getting 1 to 2% of the “most” vote, with few or no people putting him as “least preferred”). MLV simply subtracts the “least preferred” total from the “most preferred” total for each candidate, giving a “net support rating”.
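To make the arithmetic concrete, here’s a back-of-the-envelope Python sketch using ILLUSTRATIVE shares loosely in the spirit of the Iowa example (not actual ballot data – the “least” shares in particular are my assumption about the mutual antipathy):

```python
# Assumed, illustrative tallies (percent of voters); not real Iowa data.
most  = {"Clinton": 49.8, "Sanders": 49.2, "O'Malley": 1.0}
least = {"Clinton": 50.0, "Sanders": 49.8, "O'Malley": 0.2}   # mutual antipathy assumed

net = {c: round(most[c] - least[c], 1) for c in most}          # most-minus-least
print(net)                 # {'Clinton': -0.2, 'Sanders': -0.6, "O'Malley": 0.8}
print("MLV winner under these assumptions:", max(net, key=net.get))
```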

Under ranked choice voting either Hillary or Bernie would have won. Under MLV both would have been denied the win in favour of O’Malley, because he “pissed nobody off”.

Here’s the more detailed discussion.

Most-Least Voting (MLV) is a special case of a more general method of “stated preferences” called Best-Worst Scaling (BWS). Declaration of interest: I am a co-author on the definitive CUP textbook on BWS, was involved (along with its inventor) in much of the theoretical development and application in various fields (most notably health). HOWEVER I have had no involvement with the theory, parameterisation or application of MLV. Indeed, once I became aware of this method of voting, on checking the bibliography, it became clear that the authors were not actually aware of BWS and due to the “silo effect” in academia, had come up with it largely independently of what we had already done. Incidentally some of the Baltic States have used or do use MLV in certain instances so it isn’t just a “theoretical curiosity”.

OK, having got that out the way, what do I think of MLV? In short, I think it is worthy of serious consideration and wish we’d thought of it first! Like ranked choice voting with single member constituencies (something in use or proposed in various Anglo-Saxon countries like Australia, the UK and USA), it is not “proper” Proportional Representation (PR). However, it can be considered either as a nice compromise, or as a stepping stone to “full PR”. In terms of its similarities to ranked choice voting: suppose there are 5 candidates in your constituency. Under ranked choice, for the maths to not be horribly skewed and potentially very very gameable, you should be forced to rank all five, 1,2,3,4,5. The problem, known since the mid 1960s, is that people are good at “top” and “bottom” ranks but get very “random” and arbitrary “in the middle”. MLV exploits this. It only asks for top and bottom. Thus it may be considered to be the “minimum change to first-past-the-post – FPTP – possible” so as to “make things easy for people”. You only provide ONE extra piece of information – the candidate/Party you like least. If you do not provide both a MOST and a LEAST choice then your ballot is spoilt. This is IMPERATIVE for the maths to work, and for the system to be demonstrably “equitable”. (Most-minus-least vote totals must sum to zero.)

The common question is: “Suppose there are only three candidates – aren’t ranked choice and MLV the same?” NO. See above for a real-life example. Ranked choice MIGHT even be unconstitutional in certain countries (if the mathematicians and lawyers ever got together), because not every voter has the “same influence” mathematically.

So what is happening in practice?  The authors conclude that if the “FPTP winning” candidate espouses (say) a very extreme policy on (say) immigration or something, that all other parties abhor, then (s)he is likely to lose. All other parties “gang up” and place that candidate as “least”. Most-minus-least vote tally is net (highly?) negative. A more “moderate” candidate likely wins. Indeed, the authors claim that “centrists” likely prevail a lot of the time – though they might be an “O’Malley with 1% primary vote”. Though if a candidate would get a MAJORITY (and not just a PLURALITY) under FPTP, they’ll still win under MLV. So “majority” (non-coalition) governments still can happen – they’re just harder to achieve and “third parties” (etc) much more easily get a foothold. I happen to think that this “centrists rule” conclusion is a little simplistic when you move from a single dimension (left/right) to multidimensional space. Yes, maybe you get a candidate closest to the centroid across all dimensions but “how strongly people regard each dimension” can affect results. So, as they frustratingly say in academic papers, it’s “an empirical issue” as to what will happen. However, I will venture a conclusion that “extremists” will naturally get weeded out. Whilst some extremists might be generally considered bad (consider dictators who were first voted in via pluralities in 1930s Europe), others (painted as “extremists” by the MSM like a Sanders today or an Attlee or FDR of yesteryear) could be considered necessary and without them society would be much worse off. It gets necessarily subjective here…!

TL;DR: Arrow’s Impossibility Theorem still holds. MLV doesn’t solve all problems but it is attractive in addressing a lot of the most commonly made criticisms of voting systems used in the UK and USA. However, it isn’t the ONLY system that can address these criticisms – it is merely the “simplest” in terms of practicality and requiring “minimum extra effort by voters beyond what they do now”. Whether you “like it” depends on your “values”.