
Please note: None of the ideas expressed in this blog post are shared, supported, or endorsed in any manner by my employer.

Leaving Labour

Unfortunately I’m probably leaving the Labour Party. It might seem odd, given that Labour have recovered massively and are now level-pegging with the ruling Conservatives, but I don’t think that is good enough to gain power, nor good enough in terms of policy.

Here is a fairly heavily edited version of my resignation explaining my decision, in response to their nice email asking why:

 

Dear [membership people],

 

Thank you for the email. I’m leaving for a few reasons – I stress that I don’t think Labour are “in dire straits ….. YET”, nor do I think they picked the wrong leader at all. My reasons below relate to my (in many cases professional) knowledge of the electorate and how I don’t feel Labour is catering to the correct groups in order to gain power. I can’t give money when I don’t feel the party really engages with or stands up for “people like me”.

 

1. What about Generation X? Due to the arrival of covid-19 I only attended one meeting in person – the one around xmas time. Frankly it was a little depressing that of the three “electoral groupings” that are of relevance to gaining power (older voters, gen X – people like me, and young’uns – millennials etc) I saw plenty of the first and third but wasn’t sure I saw another Xer like me at all.

 

My “professional case” for talking about voting groups like this is https://www.cambridge.org/gb/academic/subjects/economics/econometrics-statistics-and-mathematical-economics/best-worst-scaling-theory-methods-and-applications . I ran a survey just before the 2017 general election. I KNEW May would lose her majority. Nobody was interested. So I just went to the bookie, bet on it, and made money. I KNOW what needs to be done to regain the red wall etc. I’m gay and out. But the noisy rubbish the young’uns go on about in terms of arguing over groupings/language etc just seems like the early 1980s fights over “who is really proper left-wing” all over again. I and my contemporaries are NOT interested in all that social media fighting. Those morons don’t bother to turn out to vote anyway. (Yeah, I dislike Millennials. But I taught them, so I am not uninformed about them.) We Xers want issues like the economy addressed, since we’ve been cheated. Without us you won’t rebuild that wall.

 

2. Triumphalism and a belief that Labour are now close to power. Whilst I did put Keir as first preference, I felt a little put off by attitudes among senior Labour members that he was “the cure to a cancer”. Corbyn was (in my view) demonstrably the wrong leader, but his policies were all popular, and now is NOT the time to “tack centre-wards” and give the impression Labour is now Blair 2.0. I LOATHE Blairism and so do most Xers – it screwed us and we now all see our mistake in supporting it 20 years ago. If Labour stands for that then you’ve not only lost my membership, you’ll lose my vote.

 

3. Electoral reform. Again, my book is the reference here. Stand for reform. To another system. Frankly ANY system apart from FPTP. You’re probably only going to get one more chance at this (I think Labour will win in England and Wales only one more time). You’ve lost Scotland – the SNP is pretty close to old Labour. You could be losing Wales. Last chance saloon time.

My overall feeling is that local officials give the impression that “the adults are back in charge – with adults meaning Blairites”. Ugh. Also, for a QC, Keir disappointed me in his response to Covid-19. He should have been ready to argue for lockdown 2.0 two months ago. The studies are out there (my PhD was in medical stats so I know what to read). Even asymptomatic people with it could be getting permanent heart damage. Keir really was too collegiate with Boris and missed a trick there. It worries me. I think he needs some better advisors who read epidemiology more widely. Yes, he’s miles ahead of Boris. But that’s not enough if 15% of covid infectees get heart failure in 15 years. Take the gloves off.

 

I hope this doesn’t sound rude. As a former academic I often “default” to critical mode, as when refereeing papers! I also wish to emphasise that local Labour has people who work exceptionally hard and are very dedicated, and who made my decision difficult – my local councillor xxxxxx would be top of that list – he actively helped my Dad’s company when we tried to get our PPE registered during the early stages of covid-19 after we retooled production. But there are many others.

 

To clarify one point: I do understand (having encountered it) puzzlement from people as to why I and other gen Xers don’t value what Blair did. Permit me to give one example, literally from today, to illustrate why his continuation and expansion of PPPs and other schemes in line with his explicitly anti-clause 4 agenda was deeply wrong. I know someone working in admin at the City hospital. That person was today told to clean disgusting covid-ridden wards so a “surprise” inspection would give the Trust a pass (and was then explicitly denied a covid test). I can only assume the contracted cleaners don’t keep the wards clean enough and no emergency contract was possible to get them to sort out the mess they left. Various admin staff (who should NOT be anywhere near covid), up to senior management, were on hands and knees doing this. It was DISGRACEFUL.

 

If anyone needs a concrete example as to why I won’t vote for parties that endorse contracting out, this is one. My MSc dissertation was a proof of why they’re not only wrong but economically inefficient. The “Third Way” is a mirage. Today I truly felt my stomach drop after hearing what is going on at the Nottingham City hospital.

 

Kind regards and thank you for reaching out to me and for the work you’ve all done recently.

 

Terry

 

 

Terry N Flynn PhD

 

www.terryflynn.net/

https://www.cambridge.org/gb/academic/subjects/economics/econometrics-statistics-and-mathematical-economics/best-worst-scaling-theory-methods-and-applications

 


Random Utility Theory

 

I have left the field I made my name in during my postdoctoral work – random utility theory. The reasons are not ones I will go into here. However, I thought I’d write a post – maybe my final word on the subject – attempting to explain it and its importance.

 

Let’s split the discussion into a few sections:

  1. What the economists (and hence most people) think about how humans make decisions;
  2. How and why a model from psychology developed in the 1920s moves us so much further forward, but suffers from a problem whereby its predictions, under certain circumstances, can “look so much like economics that economics gets a pass”;
  3. How a better knowledge of how this psychological theory can and cannot be used could help us immensely.

 

  1. What the economists (and hence most people) think about how humans make decisions

 

Economics generally splits into two branches – microeconomics (seeking to understand and explain how individuals make choices) and macroeconomics (seeking to understand how entire economies might work or not work). We are dealing strictly with the former (although where this “carries through” to the latter will be touched upon).

 

In microeconomics, humans are assumed to be what has come to be called “homo economicus” – a somewhat “computer-like” entity that works out what provides the most “utility” (what makes you best off) and chooses the thing or path that maximises the chances of getting that. EVERY TIME. So if we had a science-fiction “multiverse” of 1000 parallel universes, all identical up to the point of choice, our “key human” makes the SAME choice in all 1000 universes. If you observe something different in one of these universes, it’s due to some error in the OBSERVER’s ability to observe everything that matters, NOT the observed human. That’s the “economics interpretation” of Random Utility Theory (RUT).

 

Psychologists, following observations of REAL humans in a variety of contexts for 100 years, have come to a different conclusion. They too observe that our “key human” might do something different in one or more universes. BUT they believe this is due to some inherent property of HUMAN BEHAVIOUR. Namely, that whilst our “key human” has a mean (average) tendency to “perform action x”, there is a VARIANCE associated with the behaviour. Thus, whilst on AVERAGE across our 1000 universes we’d certainly see our key human “do x”, there are a number of universes in which he/she does “not x”.

 

This variance might be very, very small – this typically happens when “action x” taps into something like an intrinsic attitude – think about views on things like abortion. The variance might be large – think about something our human really isn’t very sure about (typically due to lack of experience, like a new feature of a mobile/cell phone). Or maybe it’s something in between – like the brand of baked beans, where our key human has a definite preferred brand but “random factors” can cause a change in the brand bought for no discernible reason.
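The multiverse idea above can be sketched in a few lines of Python. This is purely my illustration (the means and standard deviations are made up): the latent inclination in each universe is a Gaussian draw, and the person “does x” whenever the draw is positive.

```python
import random

random.seed(0)  # make the "multiverse" reproducible

def simulate_universes(mean, sd, n=1000):
    """Count the universes (out of n) in which the latent inclination,
    drawn as Gaussian(mean, sd), comes out positive, i.e. 'do x'."""
    return sum(1 for _ in range(n) if random.gauss(mean, sd) > 0)

# Same average inclination, different variances (illustrative numbers):
print(simulate_universes(mean=1.0, sd=0.1))  # intrinsic attitude: all 1000
print(simulate_universes(mean=1.0, sd=2.0))  # unsure/new choice: roughly 690
```

With a tiny variance the behaviour is effectively deterministic; with a large one the very same “average” person does “not x” in roughly 30% of universes.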

 

Why does the choice of discipline matter? For that, we’ll go to section 2, dealing with the philosophical underpinnings of the models and why statistics can’t tell us which is “right”.

 

  2. How and why a model from psychology developed in the 1920s moves us so much further forward, but suffers from a problem whereby its predictions, under certain circumstances, can “look so much like economics that economics gets a pass”

 

Here is the basic problem in understanding whether the economists or the psychologists are correct. In most cases their predictions (and the maths) are observationally equivalent. In other words, there is no test we can administer that will give result “A” if the human is homo economicus, or result “B” if the human is “homo psychologicus”. The “right” model comes down, a lot of the time, to issues like philosophy and epistemology – how you think about the world. Now, there is a growing body of evidence – based on MRI and other medical research, huge amounts of observation, and other fields – that suggests the psychologists are probably closer to the truth than the economists. Indeed, it is interesting that the economists are the ones who keep having to “amend” their theories to allow for problems like “intransitivity” – when I prefer A to B, B to C, but C to A. That SHOULD NOT happen in a well-designed experiment if homo economicus reigns supreme. But the psychological model has no problem with it, because it is PROBABILISTIC – THERE WILL BE OCCASIONS IN WHICH I DO THIS AND THAT’S FINE – WE ARE NOT COMPUTERS.

 

LL Thurstone in the 1920s developed the “psychology version of RUT”, basically positing that (for example) you have a mean (average) position on a latent (unobserved) scale for something like “political preference – left/right”, but that you also have a variance, so in (say) 20% of universes you will vote Republican despite the fact your “average” position is on the Democrat side of the scale.
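Thurstone’s setup can be made concrete with a standard normal latent scale. The numbers below are mine, purely for illustration: a mean position 0.84 standard deviations to the Democrat side of the cut-point leaves about 20% of universes on the Republican side.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Mean latent position 0.84 to the Democrat side of a cut-point at zero,
# standard deviation 1 (illustrative numbers only).
mean, sd = 0.84, 1.0
p_republican = 1.0 - normal_cdf(mean / sd)
print(round(p_republican, 2))  # 0.2
```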

 

McFadden (coming from economics) saw the implications of Thurstone’s work and, using some slightly unconventional “tweaks” to economics (to overcome the problem that we’re not homo economicus, aka computers), used it to win the so-called economics Nobel prize for successfully predicting demand for a new light rail system that HAD NOT YET BEEN BUILT. However, there remains controversy to this day as to whether his success came more from psychology or more from “tweaking economics” to be like psychology.

 

So how does a more “psychological mindset” (as opposed to homo economicus one) help us? Can we make better predictions without having to keep “tweaking economics” as has been happening with more and more frequency via things like “behavioural economics” and its tools such as “nudge theory” etc?

 

  3. How a better knowledge of how this psychological theory can and cannot be used could help us immensely

 

To cut to the chase, the psychology version of RUT means that in 1000 universes we may make a “particular discrete choice” (e.g. “vote Democrat”) 800 times but do otherwise in the other 200 universes. The trouble is we only observe ONE universe. So what do we do? This is where discrete choice modelling comes in. If we can design cunning experiments that:

  • Get the respondent to keep making essentially the same choice a number of times BUT
  • Don’t LOOK like the same choice (which would alert them to our “subterfuge” and thereby encourage them either to “keep choosing the same” – to “look good” – or to “start switching” – to “be awkward”)

Then we can CORRECTLY estimate “how often our human votes Democrat” (for instance).

Sounds great, hey?

In theory, if you solve the practical problems mentioned above, you certainly can do this. Unfortunately there is a mathematical problem we simply can’t get past.

Here is the problem, expressed first mathematically, then intuitively:

  • ANY probability observed in repeated experiments tells you NOTHING about the mean and variance separately: mathematically they are PERFECTLY CONFOUNDED (mixed). Thus you CANNOT know whether ANY observed choice probability represents a mean effect, a variance effect, or somewhere in between
  • In short, someone choosing “Democrat” might be doing so because of strong affiliation, or because of weak affiliation with the “majority of the distribution” of their support lying to the left, or something in between

SUPPOSE OUR HUMAN VOTES DEMOCRAT 80% OF THE TIME. UNFORTUNATELY THIS IS NOT ENOUGH TO TELL US THE (1) MEAN AND (2) VARIANCE ON THE “LATENT SCALE OF POLITICAL PERSUASION”.

This insight was proven in a key mathematical statistical paper in the mid 1980s. In short:

  • You might get 800 out of 1000 “vote Democrat” outcomes because the person genuinely believes in 80% of the Democrat manifesto (the “mean” is 80% with essentially no variance);
  • You might get 800 out of 1000 “vote Democrat” outcomes because the person’s genuine belief in the Democrat manifesto (the “mean”) is some other number (70% or 90%) but the variance (lack of certainty in the Democrat candidates) is sufficiently high that we SEE 800 Democrat successes. This is NOT a valid representation of the person’s “position on the latent political scale” – they might be MORE or LESS Democrat – but UNCERTAINTY (variance) effects cause the actual observed chance of putting a checkmark in the Democrat box to be 80%;
  • You might get 800 out of 1000 “vote Democrat” outcomes because the person is actually much closer to the 50/50 mark (in terms of MEAN position on the politics scale) but a large variance (degree of uncertainty) in THIS election led to a large number of “Democrat votes” in our hypothetical multi-universe elections.
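These scenarios can be checked numerically. In a standard probit-style setup the observed choice probability is Φ(mean/sd), so only the RATIO of mean to standard deviation is identified. The (mean, sd) pairs below are made up, but all share the same ratio and hence produce the same 80% probability:

```python
from math import erf, sqrt

def choice_probability(mean, sd):
    """P('vote Democrat') when the latent position is Gaussian(mean, sd)
    and the cut-point is at zero: Phi(mean / sd)."""
    return 0.5 * (1.0 + erf((mean / sd) / sqrt(2.0)))

# Very different latent positions, identical observed behaviour:
pairs = [(0.8416, 1.0),    # firm position, unit variance
         (1.6832, 2.0),    # twice as far left, but twice as noisy
         (0.08416, 0.1)]   # barely off-centre, tiny variance
for mean, sd in pairs:
    print(round(choice_probability(mean, sd), 3))  # 0.8 every time
```

No amount of re-observing the 80% figure alone can separate these worlds; only changing the design of what is observed can.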

 

So where does that leave us?

In trouble, basically – it means that ANY OBSERVED FREQUENCY OF CHOICE IS CONSISTENT WITH AN INFINITE NUMBER OF EXPLANATIONS, RANGING FROM:

  • Mean effects – the person would “always” go that way
  • Variance effects – the person “happened” to show an 80% level of support but this was largely because their support is so “soft” that the number of universes in which they “go Democrat” could vary dramatically
  • A “mix” – this is the most likely – the person has an inherent “affiliation” with a party but could “swing” to another under the right conditions

 

Clearly, if we run “enough” trials, cunningly changing things in ways that we understand are more likely to identify a “mean effect” or a “variance effect”, then we can begin to understand which of the above three worlds we are in. I did this in the 2017 UK General Election. I beat the polling organisations and bookmakers and made a profit – it was small (since I’d never used my model in elections before and wanted proof of concept) – but I showed it could be done.

 

How did I do this? Well, I realised that stats programs are wrong. Here is what happens when turning a “political affiliation” into a vote and then, using a STATS PROGRAM, going the other way:

  • The program uses a distribution (probit or logit) to turn a “% level of support for the Democrats” into a “discrete choice” – a CONTINUOUS outcome is turned into a discrete one – and thereby information is LOST
  • If we go the other way (as we do in ANY election prediction) we must make assumptions about how much of the variation is a mean effect and how much is a variance effect. ALL stats programs set the variance to be equal to one (they “normalise” the variance).
  • If the variance is NOT in fact the same across groups of voters then you predict incorrectly.
  • I, AND YOUGOV, realised that the “variance” (consistency of response) was predictable via attitudes. Thus in the 2017 UK General Election the “alternative” model of YouGov worked well – that model, AND MINE, beat all the main models. I was just sad that YouGov measured attitudes using rating scales (notoriously unreliable). Thus their model didn’t really work in 2019 and will probably get binned entirely – which is a shame, as they’re on the right track.
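The normalisation problem above can be sketched with made-up numbers: two groups of voters share the SAME latent mean but have different variances, so a program that fixes every variance at one has no choice but to report different means.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def normal_quantile(p, lo=-10.0, hi=10.0):
    """Invert the standard normal CDF by bisection (crude but adequate)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if normal_cdf(mid) < p else (lo, mid)
    return (lo + hi) / 2.0

# Two groups, identical latent mean, different variances (made-up numbers):
mean = 1.0
for sd in (1.0, 3.0):
    share = normal_cdf(mean / sd)        # what we observe at the ballot box
    reported = normal_quantile(share)    # the "mean" a sigma=1 model reports
    print(round(share, 3), round(reported, 3))
```

The second group’s purely variance-driven difference in vote share shows up as an apparent mean of roughly 0.33 instead of the true 1.0 – exactly the kind of mis-prediction described above.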

 

 

So, IF we understand, via attitudes, something about variances, we can make the stats program adjust the variances to be correct – and NOT one. Then predictions will be better.

 

I’d love to see this happen, but I’m not sure it will. If you do it right, though, then aggregation (i.e. the macroeconomics) will suddenly predict well.