Category Archives: Economics

Myers-Briggs

Oh dear. Here we go again. What personality are you? How the Myers-Briggs test took over the world.

It gets boring shooting down M-B. It’s like shooting fish in a barrel. After all, when M-B compares unfavourably even to a questionnaire (see link to Quartz article about the Big-Five) that states:

Rather than giving an absolute score in each of the Big Five categories, they tell you your percentile in comparison to others within your gender

you know you're in deep doo-doo. That percentile approach is making interpersonal comparisons. Contrary to the Guardian's quoted criticisms of M-B as unrealistic binary choices, that is NOT its problem. Discrete choices are EXACTLY what you should be getting people to do. It is how you INTERPRET and ANALYSE them that matters. Some tips on judging these types of instrument:

  • Ensure they are based on a sound theoretical model. Schwartz’s List of Values is good because the types appear as segments of a kind of pie chart. Diametrically opposite types are on opposite sides of the circle whilst more similar ones are closer together.
  • If you can’t run a regression to give complete results for ONE person – without drawing on ANY information from ANY other person – then it’s bad.
  • The corollary to the above point is that you must statistically have positive degrees of freedom: more independent datapoints than parameters being estimated. Which means repeated choices. Which leads to:
  • You must get insights into an individual’s consistency (variance). Only in certain controversial areas of life do humans typically exhibit perfect consistency. Generally, kids, older people, people with lower levels of education and/or literacy display higher variances.

The kind of questions these questionnaires should be addressing are ones like: "Of the multi-dimensional universe of 'types', which type or mixture of types best describes me, when I've been asked to do as many comparisons as possible?"

Even with a proper statistical design (e.g. an orthogonal design), two people might look very different in terms of their observed frequencies in agreeing with each statement. Person A has frequencies (estimated probabilities) that are all fairly squashed toward one divided by the size of the choice set: so if you've presented pairs, they'll all be close to 0.5. Person B might have frequencies that are all close to one and zero. If the PATTERN is the same, though, person A and person B are likely the same type of person. It's just that for some reason person B was more consistent (lower variance) in answering.
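
To make that concrete, here is a minimal sketch (entirely hypothetical utilities, scale values and function names of my own, not from any real instrument): two respondents share exactly the same underlying preference pattern, but respondent B answers with lower variance (higher scale), so B's choice frequencies sit near 0 and 1 while A's are squashed toward 0.5.

```python
import numpy as np

def pair_choice_prob(utility_diff, scale):
    """P(choose option 1 over option 2) in a binary logit.
    The estimable quantity is scale * utility_diff, so the mean (utility)
    and the scale (inverse variance) are confounded."""
    return 1.0 / (1.0 + np.exp(-scale * utility_diff))

# Same underlying utility differences (same "type") for both respondents...
utility_diffs = np.array([0.4, -0.2, 0.8, -0.6, 0.1])

# ...but respondent B is far more consistent (higher scale = lower variance).
scale_A, scale_B = 0.5, 5.0

probs_A = pair_choice_prob(utility_diffs, scale_A)
probs_B = pair_choice_prob(utility_diffs, scale_B)

print("Respondent A:", np.round(probs_A, 2))   # all squashed toward 0.5
print("Respondent B:", np.round(probs_B, 2))   # pushed toward 0 and 1
print("Same ordering of items?",
      (np.argsort(probs_A) == np.argsort(probs_B)).all())  # True: same pattern
```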

I never worked on personality questionnaires but I did discuss the issues with Geoff Soutar and Julie Lee when they came, many times, to work with Louviere during my 6 years in Sydney. So I know this stream of work quite well. Schwartz himself decided to "throw away" his old scoring system for the LoV – which necessarily spent many pages trying to net out person-specific heuristics – in favour of Best-Worst Scaling. BWS avoids getting people to use numbers. It uses the most natural way to make a choice: one from a few.

As a final note, this brings me back to a comment I’ve seen on NC by someone who was genuinely trying to be helpful in understanding the logit and probit models. Unfortunately the link was to a Stata working paper I’ve deliberately steered clear of because it all goes wrong in the final two pages.

Those “tricks” to understand means and variances? Dig out your logit/probit data for ONE individual. Can you run them? Unless you’ve been doing a well-designed discrete choice experiment you’re about to ask me “are you out of your mind? Everyone knows you get just a one or a zero for a person”. That, dear reader, is why the writer has not properly thought through this guide.
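
As a tiny illustration of the point (a toy sketch of mine, nothing to do with the Stata paper in question): with a single 0/1 outcome for one person, the logit likelihood just keeps rising as the fitted probability is pushed towards the observed outcome, so there is no finite estimate for that individual. You need repeated choices from the same person before anything is estimable.

```python
import numpy as np

# One person, one observation: y = 1. Logit model with an intercept b only.
y = 1
intercepts = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])
probs = 1.0 / (1.0 + np.exp(-intercepts))
log_lik = y * np.log(probs) + (1 - y) * np.log(1 - probs)

for b, ll in zip(intercepts, log_lik):
    print(f"intercept={b:5.1f}  P(y=1)={1/(1+np.exp(-b)):.4f}  logL={ll:.4f}")
# The log-likelihood keeps increasing toward 0 as b -> infinity:
# no finite maximum, hence no usable estimate from a single 0/1 outcome.
```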

Predicted probabilities, BIC, etc. are, in fact, all still potentially wrong because the likelihood function in logit/probit models fixes the variance. So even following all the rules you can misinterpret the mean-variance split. You need external information. Which is why the "sterilising/non-sterilising vaccine" information regarding SARS-CoV-2 is so crucial. I can now definitively rule out the "means model" – which is exactly what the conventional logit/probit models assume. So their results are wrong by design.

 

 

Revelation re Covid

Occasionally you are putting your thoughts into words and realise you finally "get" something. That happened today when explaining why I was suspicious of two papers "explaining" SARS-CoV-2 (aka Covid-19) that were linked to by NakedCapitalism.com. NB NC were not "endorsing" these studies, merely putting them out there for discussion and critique. I duly did so and had a revelation.

I now know whether SARS-CoV-2 vaccination has primarily mean or variance effects. It is mostly about variances. Which is the nightmare scenario. How did I come to this revelation? Well, as usual, it was by absorbing the wise words and experience of those who are "at the front line".

Here is the deal.

  • We know none of the vaccines for SARS-COV-2 are sterilising.
  • Thus you can "catch" it more than once.
  • We know from breakthrough cases and rapid emergence of variants (that respond at differential rates to existing vaccines) that people don’t follow a binary model [0,1] – be protected through chance/vaccine or get Covid. They can get it 2+ times.
  • Thus we have a logit/probit model with variances – when it comes to a “latent scale of susceptibility to infection” people do not have a “mountain” that is shifted following a bout or a vaccine. The vaccine just flattens the mountain into a gentle hill. Less likely to get horrifically ill but high variance – they can get it multiple times.
  • The papers referred to – like all the papers I've read so far – assume the vaccine effects are ENTIRELY BASED IN MEANS.
  • This is conceptually incompatible with what we know from the vaccines and what their manufacturers state (albeit in small print sometimes) – the vaccines are non-sterilising. They reduce symptom severity but don’t stop you getting SARS-Cov-2 again.
  • THUS A MODEL ASSUMING THE ODDS/RISK RATIOS ARE HEAVILY INFLUENCED BY VARIANCES NOT MEANS IS THE ONLY VAGUELY VALID ONE. MEAN-BASED ONES ARE AUTOMATICALLY WRONG. THEIR ESTIMATES ARE BIASED.
  • YET ALL THE PAPERS ARE ASSUMING MEANS, i.e. STERILISING VACCINES. WTF?

 

So what will be the final outcome? Basically ANY piece that doesn’t attempt (even in a rudimentary way) to separate, or at least comment on, the mean-variance confound and note that the evidence favours variances is not going to be read by me. It goes into the same class as “papers that try to explain flights via flat earth paradigms”. Garbage. Nice to finally have a good rule that enables me to implement a policy I’ve rarely had enough “concrete data” to support. However, the data and interpretation from the good people at places like NC have “solved” the mean-variance confound for me.

Any paper that quotes risk/odds ratios without discussing variances is trash. I’m not reading or commenting on it. Maybe I’ll print it out for use in the next toilet paper shortage? Full stop.

Covid postscript

Just a few comments to clarify complex ideas regarding “variances” in limited dependent variable models.

When I say “mountains are flattened”, I mean disease severity is reduced among that subgroup who were previously “hit hard”, but the burden of disease is spread more evenly across everyone. So in ten parallel universes, instead of the same 10% of people ALWAYS getting very ill, 90% of people will get somewhat ill. The particular 90% varies in each universe. You personally are no longer “assured” of getting (or not getting) covid. The variance goes up across the population. Though down for a lot of individuals who previously would have had the same outcome 10 out of 10 times.

Identifying people with zero variance is important. These people are deterministic, not probabilistic. They can't be in a biostats (logit/probit) model. You just describe them qualitatively according to what determines their disease status. Don't attempt to include them thinking that they boost sample sizes and improve precision. It's like saying "I'm going to take an average of a bunch of numbers that includes infinity". Dumb Dumb Dumb. These people, annoying though they are for logit/probit models, are actually useful in policy if you can find out WHY they do/don't get ill.

The KEY factor here is variance heterogeneity. If one group of people, in 100 parallel universes, experiences 40 cases – the SAME 40 people in all 100 universes – then it CANNOT be aggregated with another group of people who, in 100 parallel universes, also experience 40 cases on average, but where the 40 cases vary immensely across the 100 universes. Any universe from the first hundred has "consistency". Any universe from the second hundred has extreme inconsistency. Aggregating them can't be done. It doesn't, UNLIKE A LINEAR MODEL, just mess with standard errors. It causes BIAS.
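
Here is a minimal numerical sketch of that bias, with toy numbers of my own (in the spirit of the Yatchew–Griliches result I cite later as [1]): two groups share the same true effect on the latent scale, but one is far noisier (larger latent variance, smaller scale); pooling them and reading off a single log-odds ratio gives a figure that is neither group's effect nor even their average.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

# Same true effect of a binary exposure x on the latent scale in both groups...
beta_true, intercept = 4.0, -2.0
# ...but group 2 has a much larger latent variance (smaller scale).
scale_1, scale_2 = 1.0, 0.25

# Within-group logits recover scale * beta, not beta itself:
p1 = {x: inv_logit(scale_1 * (intercept + beta_true * x)) for x in (0, 1)}
p2 = {x: inv_logit(scale_2 * (intercept + beta_true * x)) for x in (0, 1)}
print("group 1 log-odds ratio:", round(logit(p1[1]) - logit(p1[0]), 3))  # 4.0
print("group 2 log-odds ratio:", round(logit(p2[1]) - logit(p2[0]), 3))  # 1.0

# Naively pool the two (equal-sized) groups and fit one logit:
pooled = {x: 0.5 * p1[x] + 0.5 * p2[x] for x in (0, 1)}
print("pooled  log-odds ratio:", round(logit(pooled[1]) - logit(pooled[0]), 3))
# The pooled figure (~2.2) is neither 4.0, nor 1.0, nor their average of 2.5:
# ignoring the variance difference biases the estimate, it does not just
# inflate the standard errors as it would in a linear model.
```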

 

COVID-19 variants. Statistical concerns

This piece draws heavily upon a piece published at NakedCapitalism. Pretty much all the references regarding epidemiological explanations and “on the ground” observations are there so in the interests of brevity (and my own schedule at the moment) I’ll simply give that as the main reference. I’ll put a few notes in regarding other issues though.

*****

I’ve written before that stated preference (SP) data using logit/probit models – examples of limited dependent variable models, so-called because the outcome isn’t continuous like GDP or blood pressure – are very hard to interpret [a]. Technically they have an infinite number of solutions. It is incumbent upon the researcher either to collect a second dataset, totally independent in nature of the first (so we now have two equations to solve for the two unknowns – mean and variance) or use experience and common sense to give us the most likely explanation (or a small number of likely ones). This is technically true of revealed preference data (actual observed decisions) too [b] and Covid-19 might be an unfolding horrific example of where we are pursuing the “wrong” interpretation of the observed outcomes.
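
A stylised sketch of that "two equations, two unknowns" point, with made-up numbers (this is what the Swait & Louviere scale-parameter reference [5] listed later formalises): each dataset only ever identifies scale × beta, but the ratio of the two sets of estimates identifies the relative scale, after which a common beta can be recovered.

```python
import numpy as np

# Two data sources measuring the SAME true preference vector, but with
# different latent variances (scales). Each logit only recovers scale * beta.
true_beta = np.array([1.0, -0.5, 2.0])
lam1, lam2 = 1.0, 0.4           # hypothetical scale of each data source

beta_hat_1 = lam1 * true_beta   # what a logit on dataset 1 recovers
beta_hat_2 = lam2 * true_beta   # what a logit on dataset 2 recovers

# The two sets of estimates differ, yet the underlying preferences are identical:
print(beta_hat_1, beta_hat_2)

# With BOTH datasets you can estimate the RELATIVE scale (the second equation
# for the second unknown), normalise one scale to 1 and recover a common beta:
rel_scale = np.mean(beta_hat_2 / beta_hat_1)   # = lam2 / lam1 = 0.4
print("relative scale:", rel_scale)
print("rescaled dataset-2 estimates:", beta_hat_2 / rel_scale)  # matches beta_hat_1
```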

Background: What’s happened in various “high vaccination” countries so far?

In short, rates of Covid-19 initially dropped through the floor, typically in line with vaccination coverage, then started bouncing back. However, the strong correlation between case numbers and hospitalisation/death did not re-appear. This is consistent with the fact that the vaccines are not "sterilising vaccines" – you can still catch Covid-19; it's just that the vaccine is (largely) stopping the infection from playing havoc with your body.

Sounds like a step forward? Actually, without widespread adjunct interventions (good mask usage etc) to stop the spread in the first place, this is potentially very very bad. We’ve already seen variants arise. The Delta variant is causing increasing havoc, whilst Lambda is becoming dominant in South America. The Pfizer vaccine – which thanks to media failures was often touted as “the bestest evah” – seems particularly ill-equipped to deal with Delta. NC is covering this very well.

The bio-chemists and colleagues can give good explanations of WHAT is happening pharmacologically and epidemiologically in producing these variants. Our archetypal drunk lost his keys on the way back from the pub. However, just like in the story, he's looking for them only under the lamp-post, whilst they're actually on the dark part of the road; if you can't or won't look in the right place, of course you won't find the solution. This is what many experts are doing, and it is why Delta etc. could keep happening, at an increasing pace. Perhaps that is the real story: one with roots in statistics.

What’s the possible statistical issue here?

Consider how medical statisticians (amongst others) typically think about discrete (infected/non-infected, or live/die) outcomes. As in the SP case, the [0,1] outcome is incapable of giving you a separate mean – "the average number of times a human – or particular subgroup – would get bad Covid-19 for a given level of exposure" – and variance – "the consistency of getting it for this given level of exposure". If 80% of Covid sufferers at a given exposure level needed hospital care but only 20% do when vaccinated, then analysts tend to conclude that the mean has gone down.

Suppose the "extreme opposite interpretation" (equally consistent with the observed data) is true? Suppose it's a variance effect? So the vaccine is not really – on average – bringing the theoretical hospitalisation rate down, or not by much anyway. It is simply "pushing what was a high-peaked, thin mountain into a fat, low-altitude hill" in the function relating underlying Covid-19 status to observable key outcomes. Far more people are in the tails, with an emphasis on the "hey, now Covid is no big deal for me" end [c]. The odds of hospitalisation following vaccination go way down. However, if you look at subgroups, you'll (if you're experienced) spot a tell-tale giveaway: the pattern of odds ratios across subgroups by vaccination status is VERY SIMILAR TO BEFORE – they have all just (for instance) halved. This is a trick I've used in SP data for decades and it often reveals that some intervention has a variance effect. Fewer people are going to hospital if vaccinated, but their average tendency to get a bad bout is actually unchanged by vaccination (particularly if we add the confounding factor of TIME – Covid is changing FAST).
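
Here is a small sketch of that tell-tale pattern, using made-up subgroup numbers purely for illustration: if the vaccine's only effect were to inflate the latent noise so that every estimated log-odds is multiplied by the same factor of one half, the odds ratios all shrink, but the pattern across subgroups is preserved.

```python
import numpy as np

# Hypothetical log-odds of hospitalisation for three subgroups
# (toy numbers, purely illustrative).
true_log_odds = np.array([0.6, 1.2, 1.8])

# A pure variance effect: the latent noise grows so that every estimated
# log-odds is multiplied by the same factor of 1/2.
post_vaccine_log_odds = true_log_odds / 2.0

print("odds ratios before:", np.round(np.exp(true_log_odds), 2))
print("odds ratios after: ", np.round(np.exp(post_vaccine_log_odds), 2))

# Tell-tale signature of a variance (scale) effect: the ratios BETWEEN
# subgroup log-odds are unchanged; everything has simply been rescaled.
print("pattern before:", np.round(true_log_odds / true_log_odds[0], 2))
print("pattern after: ", np.round(post_vaccine_log_odds / post_vaccine_log_odds[0], 2))
```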

This provides an ideal opportunity for the virus to quietly mutate, spread and, via natural selection coupled with fewer people taking precautions, produce a variant that is more virulent – one that is both "longer incubating" and then potentially "suddenly more lethal".

So vaccines were a bad thing?

At this point in time I’ll say “NO” [d]. However, in conjunction with bad human behaviour and an inability to think through the statistics, they have led to a complacency that might lead to worse long-term outcomes. The moral of the story is one that sites like NC have been emphasising since the start and which certain official medical and statistical authorities really dropped the ball on right from the get-go.

The vaccines merely bought us time. Time we wasted. Now a long-ignored problem with the logit (or probit) function – the key tool we use to plug "discrete cases of disease" into a "function relating underlying Covid-19 to observed disease status" – might be our undoing. Far fewer people are going to hospital following vaccination (a smaller mean in terms of lethality) but a MUCH larger number of people have become juicy petri dishes for the virus to play in (a larger variance). We have concentrated way too much on the former – the statistics textbooks tend to stress that explanation.

Trouble is, too few people read the small print at the bottom warning them that their logit/probit estimates could just as easily arise from variances, not means. Assume you observe:

  • an answer of 8 in the non-vaccinated group. You assume mean prevalence = 8 and (inverse of) variance = 1, as the stats program always does: 8*1=8.
  • In the vaccinated group you see an answer of 4. Wow, vaccination has halved the mean (prevalence) – because you ALWAYS ASSUME THE VARIANCE IS 1, the only way you can have got 4 is via 4*1!

Oops. How you SHOULD have "divvied up the mean and the inverse of the variance" was 8*1 in the non-vaccinated group and 8*(1/2) in the vaccinated group. The mean is in fact unchanged by the treatment; the inverse of the variance halved – in other words the variance doubled. People less consistently got ill [e].
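
The same arithmetic as a tiny sketch (toy numbers from the example above): all the data ever identify is the product mean × scale, where the scale is an inverse function of the latent variance, so the conventional decomposition and the "variance doubled" decomposition are observationally equivalent.

```python
# Toy illustration of the identification problem in the 8-vs-4 example above.
# What the data give you is only the PRODUCT  observed = mean * scale,
# where scale is an inverse function of the latent variance.

observed_unvaccinated = 8.0
observed_vaccinated   = 4.0

# Decomposition the standard software forces on you (scale fixed at 1):
mean_a, scale_a = 4.0, 1.0      # "vaccination halved the mean"
# An equally consistent decomposition (mean unchanged, variance doubled):
mean_b, scale_b = 8.0, 0.5      # "vaccination halved the scale"

assert mean_a * scale_a == observed_vaccinated
assert mean_b * scale_b == observed_vaccinated
print("Both (mean=4, scale=1) and (mean=8, scale=1/2) reproduce the",
      "observed 4: the data alone cannot tell them apart.")
```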

For someone like me who used to deal primarily with stated preference data the worst thing that could happen was that I’d lose the client when model-based predictions went wrong (because I’d made the wrong “split” between means and variances).

The stakes here are much much bigger. This piece is the “statistical issue” – a potential big misinterpretation of Covid-19 data – which really worries people like me.

*************

[a] See my blog; NC also reprinted one of my posts.

[b] This is how and why Daniel McFadden won the so-called Economics Nobel – he predicted the demand for the BART in California extraordinarily accurately, before it was even built. He had both stated and revealed preference data on transport usage.

[c] You can’t keep symmetry if you keep squashing the mountain down. The right tail hits 100%. So you “see” a lot more people in the left tail (doing “well”) as a result of vaccination. This leads to the mean effect – so vaccination is unlikely to be 100% variance related. There must be a certain degree of mean effect here. My point is that the “real mean effect of vaccination” is theoretically a lot less than we observe from the data.

[d] With the “Keynes get out-clause”.

[e] The actual logit and probit functions basically spit out a vector of "beta hats" which are actually "true betas MULTIPLIED by AN INVERSE function of the variance on the latent scale". So when variances go up – which in SP data happens when you get answers from people with lower literacy etc. – the "beta hats" (and hence odds ratios) all DECREASE in absolute magnitude. In other words, confusingly for non-stats people, we (to make the equation look less intimidating) tend to define a function of the variance (lambda – not to be confused with the Covid one) or mu that is MULTIPLICATIVE with the "true beta". Believe me, if you think this was a stupid simplification that would lead to confusion as people talk at cross-purposes, you are not alone.

RCV vs MLV redux

Every time I think I’ve shown that ranked-choice voting is a step forward but no panacea, it appears again. Latest reply to a NakedCapitalism post below:

 

Ranked choice voting *sigh*. Don't get me wrong: in a choice between RCV and FPTP (the status quo in most of the USA and UK) I go for RCV in a heartbeat, particularly since I have dual UK-Australian citizenship. However, there are people playing with fire here, using arguments they don't understand, and if you're interested in anecdotal evidence, those of us who spent decades eliciting public preferences in Australia came to despise the Conversation for its terrible editing etc., which allows statements like the following in the current article:

 

“Some critics incorrectly claim that ranked choice voting lets voters cast more than one ballot per person, when in fact each voter gets just one vote.”

 

True but highly misleading, and proponents will be in REAL trouble when the FPTP PMC class find the right "sound-bite" to counter this. Here's one possibility: "RCV lets voters cast one vote but some votes are worth a lot more than others". My colleague Tony Marley couldn't prove this was wrong in his PhD back in the 1960s and freaked. He ended up making a much milder statement. RCV (based on the rank-ordered logit model) does NOT give everyone equal weight in the (log)likelihood function and this can matter hugely.

 

 South-West Norfolk is a UK Westminster Parliamentary constituency in which RCV would make ABSOLUTELY NO DIFFERENCE because the Conservatives practically always win it with way more than 50% of the primary vote – there would BE NO “second round” etc. The fact that at the last election a candidate for the Monster Raving Looney Party stood – something that only ever used to happen in the constituency of the sitting Prime Minister as a publicity stunt – shows that anger in the general population is spreading.

 

Statements above in this thread claim that RCV can lead to "communism" or "mediocrities". Perhaps in theory, but that has not been the experience in Australia. In fact it simply allowed "non-mainstream" people on both sides to gain a few seats, but the broad split in terms of "left and right" was replicated in Parliament. Sounds good? Hmm. Statement of (non) conflict of interest: I am one of 3 world experts in a way of eliciting public preferences called Best-Worst Scaling. Dutch/Belgian groups applied it (with NO input/knowledge from me) to voting to give "Most-Least Voting" – a "scaled back" version of RCV in which you ONLY indicate your "top" and "bottom" candidate/party. This is because people are lousy at "middle rankings". I once hypothesised a scenario in which RCV could give a seriously problematic result which MLV might avoid. To be honest I considered it a theoretical curiosity. Until it happened in the Iowa 2016 Democrat Primary. Essentially there was a dead heat (approx 49.75% each) for Sanders and Hillary. O'Malley came a distant third with 0.5%.

 

RCV would have essentially given 0.5% of voters in a single state the decision as to who won and got the crucial "momentum" that might have made them unstoppable. MLV would probably have gone as follows: Sanders supporters put him as most desired and Hillary as least (49.75 − 49.75 = 0% net score). Vice versa for Hillary's supporters. O'Malley gets 0.5%; which of Sanders/Hillary comes "last" depends on which of them O'Malley's supporters think is the "worst evil". I don't know who they'd have chosen. But it doesn't matter. He'd have won.
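
For the arithmetically inclined, the same tally as a tiny sketch – the percentages and the "who puts whom last" assumptions are exactly the hypotheticals above, nothing more:

```python
# Net approval under MLV for the Iowa scenario above. Assumption (as in the
# text): Sanders and Hillary supporters each put the other candidate "least";
# where O'Malley's 0.5% place their "least" votes doesn't change the winner,
# so it is left out here.
most  = {"Sanders": 49.75, "Hillary": 49.75, "O'Malley": 0.5}
least = {"Sanders": 49.75, "Hillary": 49.75, "O'Malley": 0.0}

net = {name: most[name] - least[name] for name in most}
print(net)                                    # Sanders 0.0, Hillary 0.0, O'Malley 0.5
print("MLV winner:", max(net, key=net.get))   # O'Malley
```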

 

Many will say "for the individual who came last on first preferences to win is a travesty". I'd reply "why is this any less of a travesty than 0.5% of Iowa Democrats potentially deciding who faces Trump?" It depends how you phrase it. Arrow's Impossibility Theorem all over again. Effectively Iowa was a tie in which neither Sanders nor Hillary "deserved" a win. It should have caused the "race" to move on. Instead O'Malley dropped out. You see how the SAME votes can lead to VERY different results depending on the system (likelihood function)? After all, MLV with 3 candidates *IS* RCV! At least in the "ranking" you give. But the AGGREGATION and WEIGHTING are different.

 

 Final note – for those who think I’m plugging something “I devised” – you haven’t read the post. I WISH I’d been involved with voting theory. But it’s one area I had NOTHING to do with using BWS. I admire the Dutch and Belgians for applying it this way and it has, in fact, been used in Baltic States. So it ain’t some weird theoretical curiosity. Ironically it MIGHT lead to some centrist “mediocrities”. But given Hillary and Trump, maybe that might not have been so bad?

 

 RCV is a step forward. But be careful what you wish for. Personally I think that redistricting to eliminate uncompetitive seats (gerrymandering) and other aspects of electoral reform are at least as important as changing the voting system. I’ve given the references before on here and elsewhere. Happy to engage in constructive discussion since NO voting system is fair.

 

 

Guardian health retirement disingenuous

So Health Editor Sarah Boseley at the Guardian is retiring from the position. At least one commenter points out that despite the good reporting she did, her little mea culpa over MMR does not excuse the frankly execrable standards of science reporting at the time, which seemed to think that “both sides deserve equal footage”. That is NOT how science works.

Thanks to search engines, one can easily find examples like this, which Sarah can't sweep under the carpet. I remember at the time how horrific the coverage was by all media, and especially the Guardian. Some commenters try to defend her by saying that if even the Lancet got taken in, then employing someone with an appropriate scientific qualification rather than a humanities one would not have solved the problem. Errrrr, debunking such nonsense was being done routinely on the web back then by independent sites and scientists who don't, unfortunately, have the Islington links, and I remember the fury I felt. Ben Goldacre, although not yet syndicated to the Guardian with his Bad Science column, had been running it for 2+ years before Sarah's frankly dreadful comment piece. I still wonder why the syndication of his column ended in 2011.

I lived in Sydney 2009-2015 and remember seeing vaccination rates by postcode. I happened to live in the poshest postcode in Australia (renting!) and it, along with adjoining areas of "Guardian-types", had the lowest rates of MMR vaccination. I was horrified. I'm sorry Guardian: yes, you did draw attention to some horrific global health issues. But your standards of publication in medicine promoted and exacerbated a sense of "we know better" among New Labour/Third Way types who consistently undervalue STEM education and buy your trashy publication, which is no better than the Daily Mail when it comes to health. Until you take ownership of that, I will never give you credit for such self-serving nonsense – nonsense that promoted a worldwide rejection of science that future historians may judge rather harshly in terms of lives lost from COVID and vaccine-rejection, compared to whatever good you did with AIDS etc.

Do I value you overall? No. So you went to disease zones. Some of us were desperately engaged in attempts to help educate the public so they were aware of real and non-real risks. Some of us have spent decades trying to produce better indicators of well-being that would show just how awful parts of the world are. So you did it via some nice pics, but at the expense of causing millions to have their confidence in science degraded. You are not some “Female outsider” who achieved world changing stuff by being female and not a chain-smoking less-than-alcoholic. Indeed history may well judge your PUBLISHED work as being some of the worst in human history in terms of degrading human knowledge and killing kids via lack of vaccination.

Here is what I want. I want you to hold your hand up and say “Yes, I failed to properly research vaccination. I failed to look to experts like Ben Goldacre who weren’t “in the system” but who were demonstrably cleverer. I failed to understand the basic epistemology of science and that underpinning the statistics of medical/health trials in particular.” This latter is one of the most egregious errors in medicine ever committed. For you to fail to admit your mistake here is truly truly awful. Your article is consistent with someone who shows such a basic lack of self-awareness that I am truly shocked.

I’m British and Mercian – Starmer take note if you’re going to invoke Britishness

 

BREXIT has accelerated debate over whether the UK itself should break up. Scotland may soon get a second referendum. Welsh Nationalism has increased. The New York Times predicted that Northern Ireland will re-unify with Eire within a decade. When I was an actuary the statistics suggested “sometime in the 2040s” given higher Catholic birth rates. However, although detailed census data is kept secret for a century, summary statistics are released soon after the census itself. They’re likely to show that Catholics outnumber Protestants  in Northern Ireland – in 2011 they were already close (45% to 48%).

 

 Is “Unionism” at the UK level something we on the left should fight for?

My (southern) Irish surname might suggest I want rid of Northern Ireland. I actually have family links to both sides of the debate but I hold no strong view except that of self-determination. Yet self-determination, with younger NI protestants being less enamoured with Unionism and more bothered about the basics – getting sausages, milk, a passport that gives them opportunities across the EU – may well lead to Irish re-unification soon, as the NYT suggests.

 

What I’m proposing here can accommodate NI but for simplicity I’ll assume “just Great Britain – England, Scotland and Wales”. Starmer’s “Buy British” is noble but insufficient in the face of English, Welsh and Scottish Nationalism. The left-wing cause is best served if we promote a two-pronged approach which emphasises Britishness but builds on growing regional loyalties – regions which might build upon the 12 (11 if NI is excluded) “counting regions” used in referenda.

 

 Strengthening a “left-wing/progressive” Britain

The Conservatives cannot command 50+% of the vote in a large number of Westminster Parliamentary constituencies. Yet the opposition is too fragmented and loses huge numbers of seats courtesy of First-Past-The-Post (FPTP) voting. What process might cause “all progressives” to unify behind one candidate per constituency to get a Westminster majority whose sole purpose is to replace FPTP with something if not “fair”, then “fairer”?

 

 The Citizens’ Jury

The "Citizens' Jury/Citizens' Parliament" has attracted interest and its ideas are simple:

 

  • 30ish random people from the population should vote on "key topics" like "public funding", "health", "climate change" etc.
  • They should get to debate the competing issues raised by the facts presented to them after global experts present those facts.
  • They should come to some sort of conclusion or compromise.
  • A resulting policy agenda is voted on in a referendum or via an electoral pact putting only one party candidate against the Conservatives in every seat – a "pseudo-referendum".

CJs can have huge advantages:

  • A random group.
  • They get to listen to experts without the interference of "media outlets" who may have an agenda in misrepresenting things.
  • They should come to a conclusion that the wider public can be confident in, knowing that "people like me have been represented in the CJ".

How they might be problematic if not designed well:

  • Choose 30 random Brits. There's a (surprisingly high) chance you'll get few women, no gays, nobody BAME – nobody who is quite young or old, etc.
  • You can get a bad randomisation, just due to chance.

 

How SHOULD a CJ be run?

  • LIMITED randomisation – you use quotas and randomise within quotas. Thus if female adults are 55% of adults then 55% of the 30(ish) participants should be randomly selected females. If gays are 5% of the adult population then 5% of the 30(ish) should be randomly selected gays, etc.
  • This way, the final CJ should approximately represent the wider population on all key sociodemographic variables (gender, age, sexuality, ethnicity etc). However, within every "key demographic group" people are still selected randomly.

THEN you can present to them. If a key subgroup has a problem, it is clear. Debate will ensue. It cannot be ignored by virtue of "there being nobody from a BAME background or only one elderly person".
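
For the technically minded, here is a minimal sketch of that "limited randomisation" (quota-based random selection within strata). The helper function, the population file, the single quota variable and the jury size are all hypothetical, purely to show the mechanics:

```python
import random

def quota_sample(population, key, quotas, jury_size, seed=1):
    """Randomly select jurors WITHIN quota groups so the jury mirrors the
    population on the chosen demographic. population: list of dicts;
    quotas: {category: population share}."""
    rng = random.Random(seed)
    jury = []
    for category, share in quotas.items():
        members = [p for p in population if p[key] == category]
        jury.extend(rng.sample(members, round(share * jury_size)))
    return jury

# Hypothetical population of 10,000 with a single 'gender' field.
population = ([{"id": i, "gender": "female"} for i in range(5500)] +
              [{"id": i, "gender": "male"} for i in range(5500, 10000)])

jury = quota_sample(population, "gender", {"female": 0.55, "male": 0.45}, jury_size=30)
print(len(jury), "jurors,", sum(p["gender"] == "female" for p in jury), "female")
# 30 jurors, roughly 55% of them female; a real design would cross several
# quotas (age, ethnicity, sexuality etc.) rather than just one.
```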

Has a CJ ever shown a change in people's minds after experts presented?

Yes, actually. It concerns smoking. A CJ was run asking the following question: "Since smokers have SELF-INFLICTED injuries (in terms of lung cancer etc), should they be 'sent to the back of the queue' when it comes to treatment?"

 Pre-CJ a lot of people said “Yes”. After the CJ, which involved experts showing that smoking was often a “logical short-term choice” made in response to systemic problems of poverty and stress, the CJ changed its mind. Smokers were to be treated no differently from anyone else. It is a National Health Service after all.

 So how might a CJ lead to change in the UK?

There are various topics that a “proper” cross-section of Brits might decide need reform.

  • Should first-past-the-post (FPTP) be used as the voting system for our primary chamber (the House of Commons) given that often the “winner” is rejected by 60% of the constituents?
  • Should we have a second chamber that reflects “regional identities”? People are increasingly feeling loyalty to a region. The Northern Independence Party is all over Twitter. For the highlands of Scotland Holyrood is “just as distant as Westminster”. In Wales Welsh is much more a “way of life” in the north than in the south.

So maybe Britain needs something more like the USA's Senate – reflecting distinct areas that don't have equal populations but need protection to ensure a varied country that preserves regional identities.

 Final thoughts: Aren’t we just promoting ANOTHER layer of government?

No. The remit of the CJ would be:

  • Replace FPTP with a fairer system for the House of Commons;
  • Replace the House of Lords with a Senate. It would have 12+ regions, each elected by proportional representation. Laws could only pass if no region in England, Scotland or Wales vetoed them (so no more "English dominance"). The Senate would replace regional assemblies.
  • A Senate of 150 members would have 100 elected, plus 50 automatic members who are experts in fields crucial to the existence of Britain. Thus members of SAGE, the chiefs of various technical societies and other experts would be automatic members. Totally democratic? No. But do you want the best plumber or the one who gets the most stars on some stupid website?

This "power to the regions" – delegated via a written constitution forbidding Westminster from "taking the powers back except via votes akin to American Amendments to the Constitution" – would be intended to replace, not augment, regional assemblies.

Clearly, this would be presented by the media as a "power grab" intended to weaken Wales and Scotland. Yet if the Senate had veto power over key issues (national finance, environment etc) then never again could England impose its will upon Wales and Scotland (and NI if it sticks around) if even one region in any of the three (four) said no. That is real local power. Plus, unlike the current devolved institutions, if these powers form part of a constitution drafted by a CJ and voted through, Westminster can't simply take them back. I'll bet people start voting more often.

 I’m British and Mercian. Maybe that’s the kind of thinking everyone should adopt.

 

DCE references

I've promised academic references for certain statements in the last few blog entries. Here they are, numbered according to the numbers in the articles elsewhere:

[1] Specification Error in Probit Models. Adonis Yatchew and Zvi Griliches. The Review of Economics and Statistics: Vol. 67, No. 1 (Feb., 1985), pp. 134-139 (6 pages) – THIS PAPER SHOWS WHY YOU MUST ADJUST FOR DIFFERENT VARIANCES BEFORE AGGREGATING HUMANS ELSE YOU GET BIAS NOT SIMPLY INCONSISTENCY. RELEVANT TO LOGIT OR PROBIT MODELS.

[2] Combining sources of preference data. David Hensher, Jordan Louviere, Joffre Swait. Journal of Econometrics: Volume 89, Issues 1–2, 26 November 1998, Pages 197-221 – THIS PAPER SHOWS THEORETICALLY AND EMPIRICALLY WHY YOU MUST NET OUT VARIANCE DIFFERENCES BETWEEN DATA SOURCES (INCLUDING SUBJECTS) BEFORE AGGREGATING THEM.

[3] Confound it! That Pesky little scale constant messes up our convenient assumptions. Jordan Louviere & Thomas Eagle. USER-ACCESSIBLE EXPLANATION OF VARIANCE ISSUE IF [1] AND [2] UNAVAILABLE.

[4] Best-Worst Scaling: Theory, Methods and Applications. Jordan Louviere, Terry N Flynn, Anthony AJ Marley. Cambridge University Press (2015).

[5] The role of the scale parameter in estimation and comparison of multinomial logit models. Joffre Swait & Jordan Louviere. JMR 30(3): 305-314.

 

Most-Least Voting (2)

Most-Least Voting – Questions raised – some of which were serious, some I suspect were “rabble-rousing”. I’ve edited to reduce snark and generally tried to give benefit of the doubt, even though I know some people really should just go out more……

Arrow’s Theorem only applies to generic voting. Fair results can be obtained if particulars are taken into account. When you only have a few candidates MLV is not what you’d go for. With a huge pool of eligible candidates, say 1000, all available for say 9 seats, then Cumulative vote tallying is ideal.

Reference please.

“Also something polsci experts often fail to consider is degree of polarization. You don’t have to have just “like” vs “dislike”, you can have a Likert scale on degree of like/dislike, and use it to weight the votes, so that a polarizing candidate who is less polarizing than the other still has a chance to be ahead of the milquetoast centrist. I know, I know, requires fairly sophisticated voters, but worth a shot some time in experimental research trials.”

Likert scaling assumes distances between each choice (answer option) are equal. Please provide references from the mathematical psychology literature showing this to be true. (I'll save you time – there are none. My co-author was editor of the top journal – JMP – for almost 40 years and never encountered a study showing this. He is AAJ Marley.) I could quote you amusing anecdotes, like the fact that traditional Chinese older people associate the character for the number 4 with death so avoid it. Statisticians then spend yonks trying to work out if dips at number 4 are "real" or "due to cultural stuff". Please stop throwing up new terms like "Likert" when it is merely expressing a phenomenon I discredited in my postings before.

San Francisco city government, supervisors, sheriff and district attorney are chosen by ranked choice voting. That, combined with district elections for supervisors, has resulted in a parade of ineffectual, sometimes dangerous, political mediocrities, a chaotic disaster, controlled by the Democratic County Central Committee. If a voter fails to choose three candidates, their vote is thrown out.

You say ranked choice voting – I'm not defending that – so your point is?

Some supervisors have been elected with less than 25% of the vote.

Choose from Hillary, Trump and any run-of-the-mill US politician in the centre. Why does LESS THAN 25% "MEAN THEY ARE ILLEGITIMATE"? "Top" candidates don't matter under MLV if they also disgust a huge number of the rest of the population. This is NOT ranked voting (which YOU talk about). Please actually address my discussed voting system and don't straw-man.

It's horses for courses to get around Arrow. In other words, you select the most appropriate voting system for the size of the candidate pool and the seats being vied for.

I said your latter statement at the start. Why are you presenting this as a “new insight”? Arrow always said you make your moral judgments, based on “values” and the “system”, THEN you can choose the system that best achieves these. As to “get around Arrow”. Nope.

While it is an interesting fad, there is no real guarantee that rigging elections to favor centrists will get you better government. As it happens, I am a Libertarian. Some of my ill-advised fellow party members argue vociferously for ranked choice voting or the like. I attempt to point out to them that RCV tends to guarantee that my party will never win elections, but the RCV faithful will not listen.

Where did I say that MLV rigs elections in favour of centrists? I merely quoted an observation from the Dutch/Belgian researchers that centrists probably stand a better chance of being elected. If you have data showing that MLV disproportionately benefits centrists at the expense of others please quote it – PARTICULARLY in a multidimensional format (which even the continental European authors do not provide). Note I also said that in a MULTI-DIMENSIONAL world, the concept of a "centrist" is less meaningful. MLV could get you your libertarianism (in getting govt out of the bedroom). Please stop putting words into my mouth.

There’s a lot of talk about candidates and parties, but not a lot of talk about policy.

One way to create significant momentum to deal with global climate change is to place high taxes onto fossil fuels. As Illinois recently demonstrated, this is highly unpopular.

In either Ranked Choice or Most-Least systems, how do necessary but unpopular policies get enacted?

I'm not going to claim miracles. Just as under ANY other voting scheme, there must be a critical mass of people who "see the peril" and vote accordingly. MLV at least allows these people to "veto" candidates who totally dismiss the environmental issues. So it isn't "the solution" but it may be "a quicker solution". One big benefit of MLV is that it is probably the system that gives the greatest "veto power" to any majority of the population whose candidate(s) didn't make it into government. So in the UK, the strong environmental lobby crossing all the "progressive parties" who keep losing elections could start exercising real power via their "least" votes.

Labour voting reform?

So. Zoe Williams has thrown the cat amongst the pigeons with a piece attempting to predict the result of the Labour leadership election. She has interesting insights, some obvious, some perhaps less so. One that most would consider obvious is:

There are some known knowns: Thornberry, if trends continue, won’t make the ballot. 

I agree. So we're probably down to a three-horse race: Long-Bailey, Nandy and Starmer. As Williams points out, it is actually pretty difficult to pin down at least the latter two regarding their "true values". What they're saying during this campaign is not necessarily the best guide. I'm someone with a multi-decade career in examining preferences; looking at revealed preferences – what a person has DONE ALREADY – is often (though far from exclusively) the best way to understand what they value. Thus Williams has attempted to look at their votes in Parliament, among other actions. It's still an uphill task but she should be admired for trying.

Her main conclusions:

  • Long-Bailey won’t finish third in the first round of voting so won’t be eliminated;
  • In circumstances where she HAS come third, her supporters’ preferences have almost all gone to Nandy;
  • Supporters of Starmer almost all put Nandy as second preference;
  • Supporters of Nandy almost all put Starmer as second preference.

For those of you who know my background – co-author of the definitive textbook on Best-Worst Scaling – you probably have had the "aha" moment already. However, for others, I'll guide you through something I freely admit I thought was more of a "theoretical curiosity" than a real possibility in a real election. I'm quite fired up!

IN SHORT:

  • UNDER THE (SEMI-PROPORTIONAL) EXISTING ALTERNATIVE VOTE (RANKING – SPECIAL CASE OF SINGLE TRANSFERABLE VOTE) SYSTEM, STARMER WILL LIKELY WIN;
  • UNDER A HYPOTHETICAL (VERY NON-PROPORTIONAL) FIRST-PAST-THE-POST SYSTEM (CURRENTLY USED AT WESTMINSTER), LONG-BAILEY WOULD LIKELY WIN;
  • UNDER ANOTHER SEMI-PROPORTIONAL SYSTEM – MOST-LEAST VOTING – NANDY WOULD LIKELY WIN.

So, using the same set of rankings, 1, 2 & 3 from every Labour voter, we could have ANY ONE OF THE THREE LIKELY CANDIDATES WIN, DEPENDING ON THE VOTING SYSTEM. Voting enthusiasts will have likely watched “hypothetical” cases on YouTube etc showing artificial data that could do this. But I genuinely believe, if Zoe Williams is correct, that we might be about to see real data in a real election that demonstrate this phenomenon!

Before I go any further, I shall make two things clear:

  1. The Labour leadership voting system is established. It is ranked voting (Alternative Vote). This is merely a thought exercise intended to spur debate about the Labour Party’s policy for electoral reform at Westminster.
  2. Plenty of people have discussed the existing FPTP used at Westminster, and AV (as a “compromise” measure between FPTP and “full Proportional Representation – PR”). Unfortunately AV lost the referendum on electoral reform – badly – several years ago. Thus I want to illustrate another “semi-proportional compromise” that might prove more acceptable to the British public – Most-Least Voting.

2020 LABOUR POSSIBLE RESULT


Understandably, there are NO hard data on the relative percentages of support for the (assumed) three candidates likely to qualify for the final round involving the general membership of the Labour party. I've used some, I hope not unreasonable, guesstimates based on YouGov figures (from when there were 4 or 5 candidates, with the bottom two together accounting for <10%).

THESE FORM THE FIRST PREFERENCE (ROUND ONE) PERCENTAGES FOR THE THREE CANDIDATES.

For round two, it is pretty obvious Nandy will be eliminated.

The question becomes, “To whom do her supporters’ votes go?”

For that, I use the information from the aforementioned article by Zoe Williams in the Guardian. Unless Long-Bailey has a MUCH bigger lead over Starmer than seems reasonable at the moment, and unless Zoe's finding that Nandy's supporters are likely to put Starmer as 2nd is totally wrong, Starmer is likely to win after redistribution. End of story. RED FIGURES.

Under a hypothetical First-Past-The-Post (FPTP) system (as used for Parliamentary constituencies at Westminster) there’s a good chance Long-Bailey would have won – with probably a plurality but not majority (i.e. not >50% but beating the other two). BLUE FIGURES.

Most-Least Voting (GREEN FIGURES) may require some exposition since only regular readers will be familiar with it. M-L voting works on the principles that:

  • Voters MUST, for a “valid, non-spoilt ballot”, indicate their MOST PREFERRED and LEAST PREFERRED candidates.
  • So a voter gives just TWO pieces of information, "most" and "least". Clearly, as the number of candidates increases, this becomes MUCH easier than AV, which involves a "full ranking". Oodles of research (see the aforementioned textbook co-authored by me) demonstrates that people are TERRIBLE at full rankings and that this causes VERY REAL problems in terms of producing a candidate that is mathematically "the best" – the statistical rules that MUST hold for the AV algorithm to "work" almost never hold. This might be why Aussies increasingly dislike AV – I lived there for 6 years and saw it in action. Believe me, I've seen the stupid results it can produce.
  • Now, with only three candidates, like here, giving ranks 1, 2 & 3 seems identical to just selecting "most" (rank one) and "least" (rank three). Yes, the information is identical – ASSUMING THE FORMAT OF THE QUESTION HAS NOT INDUCED DIFFERENT "GAMING OF THE SYSTEM", i.e. tactical voting.
  • What differs is the MATHEMATICS OF WHAT IS DONE WITH THESE THREE RANKINGS.

Under M-L voting, "more weight is given to people's degree of dislike" – to be more precise, THE EXACT SAME WEIGHT IS GIVEN TO WHAT THEY LIKE LEAST/HATE AS TO WHAT THEY LIKE MOST/LOVE. This doesn't happen under AV. Why? And how? (A small worked sketch follows the list below.)

  1. “Most” votes for each candidate are added up, just as under FPTP.
  2. “Least” votes for each candidate are added up, separately.
  3. Each candidate’s “least” total is subtracted from their “most” total.
  4. This produces a “net approval score”. If it is positive, on average the candidate is “liked”, if negative, on average “disliked”.
  5. The candidate with the highest net approval score wins.

Note some important properties of M-L voting:

  • If you have majority support (>50%) then it becomes increasingly difficult to “knock you out” – so the British people’s oft-stated desire for “strong single party government” is not sacrificed, merely made a little more difficult.
  • For those (LOTS) of candidates in British elections winning with a plurality but not majority (i.e. winning but not obtaining 50+%), often getting low 40s, then they have to be a LOT more careful. The opposition might be divided upon their “preferred” candidate, but if they all agree you have been obnoxious to their supporters they will ALL PUT YOUR CANDIDATE AS “LEAST PREFERRED”, PUSHING THAT CANDIDATE INTO NEGATIVE TERRITORY. He/she won’t win.
  • The strategy, if you don’t have a strong majority in your constituency, is to offer a POSITIVE VISION THAT DOESN’T ENGAGE IN NEGATIVE CAMPAIGNING AGAINST YOUR OPPONENTS. Candidates who are “extreme” without being constructive LOSE.

So, what’s the relevance for the Labour leadership contest? Well, if it is true that Long-Bailey and Starmer do indeed “polarise” Labour supporters, each having a relatively large, passionate body of supporters who are ill-disposed toward the other, then electing either one could prove toxic for Labour when facing the Conservatives. Maybe the candidate who “comes through the middle by alienating virtually nobody” might be better?

As usual, I will mention Arrow’s Impossibility Theorem. There is NO FAIR VOTING SYSTEM. You must decide what are the most important targets for your system, then choose the system that best achieves these. You won’t achieve EVERY target. However, you can achieve the most important ones.

So Labour, what do you want?

  • The possibility of single-party power but, given current population dynamics, something that seems a LOT more difficult than it was in 1997 under Blair, or
  • A system which preserves the single-member constituency and which cannot be fully proportional, but which is semi-proportional and which is very very close to FPTP…..maybe close enough that it would WIN a referendum, unlike AV?

MOST-LEAST VOTING WOULD NOT BE FULLY PROPORTIONAL. BUT, GIVEN A DESIRE FOR SINGLE -MEMBER CONSTITUENCIES, IT WOULD BE SEMI-PROPORTIONAL, THUS:

  • The percentage of seats in the House of Commons per party would be MUCH closer to their percentage of the popular vote;
  • HOWEVER, it would NOT be EQUAL. Parties offering popular manifestos that did not vilify others and which commanded support in the 40-something range, could STILL get an overall majority in the House of Commons.
  • Whilst those in favour of “full PR” could still complain, I’d argue this is a pragmatic compromise between two fundamentally incompatible aims – Proportionality and Single Member constituencies. Furthermore, if a majority government DOES emerge, it’s unlikely to have done so via vilifying minorities. There will be no tyranny of the majority.

FINALLY, PLEASE DO NOT TAKE ANY OF THIS AS AN ENDORSEMENT FOR A PARTICULAR CANDIDATE. I AM IN NO WAY SEEKING TO “CHANGE THE RULES” TO GET WHO I THINK WOULD DO BEST. I MERELY USED THE LABOUR CAMPAIGN AS AN EXEMPLAR BECAUSE IT HAS SOME (SORT OF) REAL NUMBERS THAT MAKE THINGS INTERESTING!!!!

I think there’s debate to be had here.