
Perils of discrete choices in vax and surveys – SPOILERS FOR AVENGERS ENDGAME

NB: Edits made between 16th and 18th June 2021 for clarity and to explain certain statistical concepts. Key references (as of the 18th) are now included and linked within this post.

Warning – for some potential LULZ I’m going to use the Marvel Cinematic Universe (MCU) as a way to explain variability in outcomes. If you haven’t watched up to and including Avengers Endgame and don’t want spoilers go watch them first!

I find myself having to explain why “this” or “that” study (frequently survey based, but sometimes a clinical study) has results that should be regarded as deeply suspect. I hope to provide some user-friendly insights here into why people like me can be very suspicious of such studies and what you as the average reader should look out for.

To aid exposition, I’ll do the old trick of “giving the punchline first, so those who don’t want the mathematics and logic can skip it”.

In many contexts you get a “one shot” for an individual – “live/die” or “cure/non-cure” or “Democrat/Republican/Libertarian/Green” in a general election etc. You don’t know the variability. Would that person – call her Mrs Smith – have displayed the same result in 14,000,604 other parallel universes which in key respects are the same as ours? MCU fans will instantly recognise this number. Spoiler alert – We’re getting into the territory of sci-fi and the Marvel Cinematic Universe here! When I say “variability” I am SPECIFICALLY referring to the variability ONE person – Mrs Smith – exhibits. Would she ALWAYS do “action p”? Or would she, in different universes, do “action q”?

Unfortunately we observe actions in ONE universe. We don’t in reality OBSERVE the “other” 14,000,604 universes – we are not Doctor Strange. So we don’t know. So scientists assume “what we observe” is, in some sense, the TRUE effect. For the statistically minded who want to say “GOTCHA – we have standard errors and confidence intervals etc to handle uncertainty” I’ll reply: go down and read the asterisked section, because we clearly are NOT on the same page here.* (TL;DR – that “variability” says NOTHING about Mrs Smith’s variation across different universes; it is based on variation ACROSS SUBJECTS who might be LIKE Mrs Smith. The two are potentially VERY VERY different.)

This can lead to spectacularly bad outcomes, but scientists defend themselves by saying “we had no choice! You can’t have a do-over like in the MCU and see if the patient dissolves to dust under a different stimulus!” On that view, calculating the “variation intrinsic to patient Smith” is impossible. I’m here to discuss how, if we’re really interested and willing to put resources into it, we CAN find out that variation in many circumstances (though not for a Thanos or other “death” event) – DID A “RESPONSIVE” PERSON RESPOND BECAUSE THEY ALWAYS WILL, OR DID WE SEE THE ONE OUT OF 14,000,605 OCCASIONS IN WHICH THEY DID? DID THE SET OF STIMULI REQUIRED TO REVERSE THE SNAP (WHICH KILLED HALF OF ALL LIVING THINGS) – ONE OUT OF 14,000,605 – REQUIRE IRON MAN TO DIE? (UNFORTUNATELY, YES). AVENGERS ENDGAME IS NOT JUST POPCORN SCI-FI BUT A SUBTLE AND CLEVER EXPLANATION OF DISCRETE CHOICE MODELLING.

To start this discussion we’ve got to go back to stats 101: heteroscedasticity. You’ll probably have been shown it in a graph. As the “x” values increase, so do the “y” values. However, the RANGE of values for y also increases – you see a “funnel”, with the tip pointed towards the lower left, getting wider as you move toward the upper right. If you do a least squares regression (to explain y values from a given x value) you’ll get the right “average” answer. However, your measure of “certainty” or “variability” – the standard error and thus confidence interval – will be wrong. This is what I’ll call a “nuisance”. Your “main answer” (the vector of betas, showing the average effect of each explanatory variable on the outcome) will be correct on average, but your level of confidence will not and needs to be adjusted. There are methods to make this adjustment, so all can be well. Continuous outcomes (like GDP, blood pressure, etc) are easily analysed using such methods. “Discrete” (one out of two or more) outcomes (yes/no……die/live……Democrat/Republican/Libertarian/Green) are not. [1,2,3,5] But here’s the important takeaway I’m going to explain – heteroscedasticity in models with continuous outcomes (GDP/blood pressure etc) is just a “nuisance”. Heteroscedasticity in limited dependent variable models (yes/no……Democrat/Republican…..etc) is a FUNDAMENTAL PROBLEM CAUSING BIAS, not just “standard errors that must be tweaked” – you usually DON’T KNOW THE DIRECTION OF THE BIAS, LET ALONE ITS MAGNITUDE. YOU ARE ROYALLY SCREWED.
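To make that contrast concrete, here is a little Monte Carlo sketch of my own (all numbers invented, nothing from any real study): a continuous outcome where heteroscedasticity is a fixable nuisance, versus a discrete outcome where the same heteroscedasticity distorts the slope itself.

```python
# Toy contrast: heteroscedasticity as a "nuisance" (OLS) vs a source of bias (logit).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n, beta = 5000, 1.0
x = rng.normal(size=n)
scale = np.where(x > 0, 3.0, 1.0)            # the error "funnel": more spread at higher x

# Continuous outcome: the OLS slope is still (approximately) unbiased; only the SEs need fixing.
y_cont = beta * x + rng.normal(scale=scale)
X = sm.add_constant(x)
naive  = sm.OLS(y_cont, X).fit()
robust = sm.OLS(y_cont, X).fit(cov_type="HC1")
print("OLS slope:", round(naive.params[1], 3), "(true 1.0)")
print("naive SE:", round(naive.bse[1], 4), " robust SE:", round(robust.bse[1], 4))

# Discrete outcome: only beta/scale is identified, and the scale differs across observations,
# so the pooled logit slope lands somewhere between 1/3 and 1.0 - biased, and no
# standard-error tweak will rescue it.
y_bin = ((beta * x + rng.logistic(scale=scale)) > 0).astype(float)
logit = sm.Logit(y_bin, X).fit(disp=0)
print("pooled logit slope:", round(logit.params[1], 3), "(true 1.0)")
```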

 

Discrete Outcomes and Their Problems

Discrete outcomes are generally coded (0,1) or (0,1,2) or (0,1,2,3) etc. These numbers usually have no intrinsic numerical meaning. They could be (a,b) or (a,b,c) or (a,b,c,d). However, the point is, to adequately understand “what is going on underneath” you usually need something that links a discrete outcome to some (albeit hypothesised) latent (unobserved) continuous numerical scale. Let’s consider 1950s America when party allegiance generally was on a single left/right dimension with “Democrat” arbitrarily chosen as the “positive” end of the scale and “Republican” as the negative. Reverse these if you want – it makes NO difference to my argument.

Mrs Smith has an underlying position on the latent “party allegiance scale”. If it is positive, she is more likely to vote Democrat and vice versa. Note she is NOT guaranteed to vote one way or the other. I’m just saying, if she is “strongly positive” then the chances of her switching to the Republican Party are small, but not zero. As her position on the “party allegiance scale” zooms up to positive infinity, the chances of her voting Democrat asymptotically approach one (but NEVER get to one).

There needs to be a “link function” to relate a (hypothesised) position on the latent party allegiance scale with a discrete choice (let’s keep it simple with just Democrat or Republican for now). Different academic disciplines prefer different functions – some prefer those based on the normal (Gaussian) distribution whilst others favour those based on the logistic distribution. In practice it makes practically no difference to the answer. Those of us who like the logistic function do so because it remains a “closed form” function when there are 3+ outcomes – in other words there is a mathematical formula that can be “maximised” using an established technique. The multinomial probit has no closed form – you have to “brute force” it numerically. Doing brute force means you must make sure your “peak” is not the tallest mountain in the Appalachians but is, in fact, Mount Everest. The logistic “maximisation routine” can make the same mistake, but it’s generally easier to spot mistakes and get to Everest first time.
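Here is a quick sketch (my own toy numbers) of why the choice of link barely matters in practice: once the probit index is rescaled by the usual factor of roughly 1.6–1.8, the two links imply very similar probabilities over the range that matters.

```python
# Logistic vs (rescaled) normal link: near-identical choice probabilities.
import numpy as np
from scipy.stats import norm, logistic

latent = np.linspace(-4, 4, 9)                     # positions on the latent allegiance scale
p_logit  = logistic.cdf(latent)                    # logistic link
p_probit = norm.cdf(latent / 1.7)                  # probit link, put on a comparable scale
for z, pl, pp in zip(latent, p_logit, p_probit):
    print(f"latent {z:+.1f}   P(Democrat): logit {pl:.3f}   probit {pp:.3f}")
```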

The link function relates a position on the latent party scale to a discrete (Democrat/Republican) outcome. In estimation we in fact run it in reverse – OBSERVING Mrs Smith’s voting and (via the link function) INFERRING her position on the latent party allegiance scale.

How do you infer a position on a continuous scale from a (0,1) response (Democrat or Republican) in traditional logit/probit studies?

You typically look at the “other factors” defining Mrs Smith, her circumstances etc and most importantly, draw strength from OTHER people in your sample. See the problem? You are already making inferences on the basis of people who are NOT Mrs Smith. This leads directly to the issue of “variability” and so I’ll put the dreaded asterisk in for you stats bods.* We are inferring information from people “like Mrs Smith”. Just because they’re female, similar age, sociodemographics etc, does NOT mean they are useful in placing Mrs Smith accurately on the latent scale. You really need to see what she’d do in various scenarios, NOT what people “seemingly like her” do.

So, this process of making inferences is incredibly dangerous unless you have seen Mrs Smith’s choice/outcome behaviour under a bunch of different scenarios. If you have merely observed her in the ONLY ONE out of 14,000,605 universes where she votes Democrat then you’ll vastly over-estimate her loyalty to the Democrats. If she in fact votes Democrat in all 14+ million universes then you’re on more solid ground. The point is, YOU DON’T KNOW (unless you’re Doctor Strange).

As it happens, Dr Strange looked at 14,000,605 universes. He saw that there was one, and ONLY one “intervention” that could lead to victory (Thanos being defeated and Earth being restored). Unfortunately the “intervention” required Tony Stark to die. He held up one quivering finger at the crucial moment in the movie to show that the “Endgame” required Tony to act. Tony was clever enough to know what this meant. Which is partly why the movie is so family-blogging great. The Russo brothers, directing, knew what they were doing. Tony “did action x” and the one outcome that we “needed” out of a possible 14,000,605 outcomes came to pass. And he died. But Earth and the universe was restored.

Discrete Choice Modelling (Discrete Choice Experiments – DCEs) As a Solution

How do we escape the above conundrum? In effect, we try to simulate some “key universes” from the 14+ million that allow us to better understand “how committed Mrs Smith is to the Democrats”. We do this via a Discrete Choice Experiment or DCE.

How, broadly, do DCEs work? In essence you vary all the “key stimuli” that influence a decision in some pre-defined, statistical way so as to “simulate” the “key universes”. Then you see what the person does in each one. You can then quantify the effect each stimulus has on that person’s utility function. Note carefully: I say “that person”. The optimal DCE can be done ON A SINGLE PERSON. You map their “demand surface” (the multi-dimensional version of the classic “demand curve”) across multiple dimensions, so you can “plug in” any combination of levels of stimuli and predict whether they’d choose a or b, Republican or Democrat…..or whatever your outcome is.
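To make the single-person idea concrete, here is a hedged simulation sketch: I invent three attributes and Mrs Smith’s “true” weights, show her a series of paired scenarios, and fit an individual-level logit to HER choices alone. Everything here (attributes, weights, number of scenarios) is made up for illustration.

```python
# A single-person DCE, simulated end to end.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
true_beta = np.array([1.0, -0.8, 0.6])       # Mrs Smith's (normally unknown) attribute weights
n_sets = 40                                   # paired scenarios shown to this ONE respondent

x_a = rng.integers(0, 2, size=(n_sets, 3)).astype(float)   # attribute levels in option A
x_b = rng.integers(0, 2, size=(n_sets, 3)).astype(float)   # attribute levels in option B
diff = x_a - x_b

# Random-utility choice: pick A whenever its utility plus Gumbel noise beats B's.
chose_a = ((x_a @ true_beta + rng.gumbel(size=n_sets)) >
           (x_b @ true_beta + rng.gumbel(size=n_sets))).astype(float)

# Individual-level logit on the attribute DIFFERENCES: this maps her demand surface
# without borrowing strength from anybody else. (If she answered deterministically,
# the fit would fail - exactly the "diehard Democrat" problem discussed further down.)
fit = sm.Logit(chose_a, diff).fit(disp=0)
print("estimated weights:", np.round(fit.params, 2), " true:", true_beta)
```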

For the stats nerds: you must have non-negative degrees of freedom. In other words, for 8 estimates (the “effect of this level of this stimulus” is a degree of freedom) you must have 8+ choices made by Mrs Smith. I’ve done this before. Incidentally, because I know the rules well I know when I can break them – I use least squares regression on discrete outcomes (0/1). You are taught NEVER to do this. Indeed, unless you know what you are doing, that is good advice. But under certain circumstances it tells you a lot. Not in terms of anything you’d quote in published results, but as a “quick and dirty” way of identifying individuals who have extreme preferences and “getting into the tails of the logistic distribution”. I write macros that “find me anyone whose choices are entirely dictated by whether the hypothesised election involves elimination of income tax”….or whatever.
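A rough sketch of that kind of “quick and dirty” least-squares screen follows – the flagging rule and threshold are my own assumptions, not the actual macros described above, and it reuses chose_a and diff from the previous sketch.

```python
# NOT for publishable estimates - a cheap per-respondent screen for extreme preferences.
import numpy as np

def quick_screen(choices, diff):
    """Least squares of one respondent's 0/1 choices on the attribute differences."""
    X = np.column_stack([np.ones(len(choices)), diff])
    coefs, *_ = np.linalg.lstsq(X, choices, rcond=None)
    fitted = X @ coefs
    ss_tot = np.sum((choices - choices.mean()) ** 2)
    r2 = np.nan if ss_tot == 0 else 1 - np.sum((choices - fitted) ** 2) / ss_tot   # nan = never varied
    dominant = int(np.argmax(np.abs(coefs[1:])))       # attribute carrying the largest weight
    return coefs[1:], r2, dominant

# Example use with the simulated respondent from the previous sketch:
# betas, r2, k = quick_screen(chose_a, diff)
# if r2 > 0.95: print("choices look dictated almost entirely by attribute", k)
```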

This “cheat” is a good way to test for an absolutely key problem with DCEs – that two people, one old, one young, might have IDENTICAL views, but the younger person seems to “respond more readily” to stimuli. I’ll group a bunch of 30 year olds and see that their “beta estimates” are all quite large – they are strongly affected (IT SEEMS) by the stimuli. I group a bunch of 70 year olds and see their “beta estimates” are much smaller. They SEEM to be less affected by the stimuli. But if the “pattern of betas” is the same I know both age groups in fact have the same preferences…..what is merely happening is that the “error rate” is higher among the 70 year olds – something commonly seen. The average level of age-related cognitive impairment is higher and they “mistakenly” choose the “less preferred” manifesto more often, “diluting” their estimates. However, when the chips are down in a real world situation (like the voting booth) they might be no different from their younger brethren. They concentrate hard and take voting seriously. Suddenly that 60/40 split seen in the SP data among the “oldies” becomes 80/20 (just like the young’uns) in the “real” – Revealed Preference (RP) – data.
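Here is a toy simulation of exactly that confound (all numbers invented): two groups share IDENTICAL preference weights, but one answers with more noise. Watch the betas shrink and the choice split slide towards 50/50 for the noisier group, even though the underlying preferences – and the ratio between the betas – never change.

```python
# Same preferences, different "error rate": the scale confound in action.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
beta = np.array([1.2, -0.8])                        # same true weights for BOTH groups
n = 4000
diff = rng.normal(loc=[1.0, -0.4], size=(n, 2))     # attribute differences, tilted towards option A

def simulate(noise_scale):
    eps = rng.logistic(scale=noise_scale, size=n)   # bigger scale = noisier answers
    y = ((diff @ beta + eps) > 0).astype(float)
    return sm.Logit(y, diff).fit(disp=0).params, y.mean()

b_young, share_young = simulate(1.0)
b_old,   share_old   = simulate(3.0)
print("young betas:", np.round(b_young, 2), "  old betas:", np.round(b_old, 2))
print("beta ratio, young vs old:", round(b_young[0] / b_young[1], 2), round(b_old[0] / b_old[1], 2))
print("share choosing A, young vs old:", round(share_young, 2), round(share_old, 2))
```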

Important point: in a Stated Preference (SP) study you deliberately “assume away” complicating factors. Choice frequencies then become very skewed. In a Revealed Preference (RP) study – REALITY – there is all sorts of cr$p that impinges upon your choices, and frequencies tend to move towards 1/n, where n is the number of response categories. So people often look “less sure” in real life, with less skewed choice frequencies, but this just reflects the fact that life is complicated and people don’t “assume everything else is equal”. Remember this.

How do you Design a DCE in Practice?

Firstly, do qualitative work to find out what stimuli (attributes) matter to respondents (those which cause their choices to change, depending on what “level” the attribute takes).

Then, to design the “scenarios” you typically design a DCE in one of two ways. The first way is broadly Bayesian. You use prior knowledge as to what levels of the attributes “typically influence subjects like Mrs Smith”. You then construct a design – a series of pairs of manifestoes in the 1950s political example, one Democrat, one Republican – that vary these attributes in ways that “quickly and efficiently” establish what issues or groups of issues cause Mrs Smith to change her vote. Think of a bunch of dots in the (x,y) plane of a graph grouped around the best-fit line. You don’t bother to ask about manifestoes that are “far away” from Mrs Smith’s likely solution. This method gives you incredibly accurate estimates. BUT if your priors about Mrs Smith are wrong, you’ll get an answer that is “incredibly accurate but wrong”. It’s all very well to know how many millimetres the highest peak in the Appalachians is but that’s no bloody use since Everest is where you should be measuring!
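For the curious, here is a bare-bones sketch of what “using priors” can look like in code: score candidate designs by the determinant of the logit information matrix evaluated at point-prior weights (a fully Bayesian design would average over a prior distribution, and real DCE software does all of this far better – the priors, attribute coding and search below are invented).

```python
# Crude prior-based design evaluation: bigger D-score = more informative design at the prior.
import numpy as np

prior = np.array([0.8, -0.6, 0.4])                    # prior guesses at Mrs Smith-type weights

def d_score(design, beta):
    """design: (n_sets, 2, K) attribute levels for the two options in each choice set."""
    info = np.zeros((len(beta), len(beta)))
    for choice_set in design:
        v = choice_set @ beta                          # utilities of the two options
        p = np.exp(v) / np.exp(v).sum()                # logit choice probabilities
        xbar = p @ choice_set                          # probability-weighted mean attributes
        for x_j, p_j in zip(choice_set, p):
            d = x_j - xbar
            info += p_j * np.outer(d, d)               # logit information matrix contribution
    return np.linalg.det(info)

# Crude random search over candidate 12-scenario designs with three 2-level attributes:
rng = np.random.default_rng(0)
candidates = [rng.integers(0, 2, size=(12, 2, 3)).astype(float) for _ in range(200)]
best = max(candidates, key=lambda d: d_score(d, prior))
print("best D-score found:", round(d_score(best, prior), 3))
```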

The second way utilises “orthogonal designs”. These essentially vary the stimuli (attribute levels) at right angles to one another so as to “cover all the utility space”. The downside is that some “pseudo-elections” involve manifestoes with policies that are pretty unrealistic. The upside is that “you’ve covered all possibilities”. You go looking for peaks in Bangladesh. Top tip – that’s dumb. But ultimately, because you are ALSO looking at the Nepalese Himalayas, you’ll find Everest. Is this better or worse? It depends. If you use Bayesian methods sensibly you can do at least as well as orthogonal designs. But they are a delicate power tool, not to be used by the non-expert.
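A minimal sketch of what “orthogonal” means here – the full factorial for four 2-level attributes, whose effects-coded columns are exactly uncorrelated:

```python
# A tiny orthogonal design: the full factorial for four 2-level attributes.
from itertools import product
import numpy as np

design = np.array(list(product([-1, 1], repeat=4)))   # all 16 level combinations, effects-coded
print(design.shape)          # (16, 4)
print(design.T @ design)     # 16 * identity matrix: the attribute columns are exactly orthogonal

# Real studies usually take an orthogonal FRACTION of a bigger factorial to keep the number
# of scenarios manageable, but the principle - uncorrelated attribute columns sweeping the
# whole utility space - is the same.
```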

Consider the original iPhone. Whilst phones using some form of interactive screen had been around, the iPhone was genuinely revolutionary. Using “priors” based on the Blackberry, or on Windows phones driven by a stylus, would have been seriously dangerous. The iPhone “thought outside the box”. That’s when an orthogonal design is best. Sometimes Mrs Smith doesn’t KNOW what she’d like until a hypothetical product is mocked up and shown on the screen. Bayesian methods would “stick to variations on what is available”. Orthogonal designs “imagine stuff” – OK, at the expense of asking dumb questions about Bangladeshi mountains…..but whilst I value both approaches, I prefer orthogonal designs. Cause I like to investigate “new stuff”. Sometimes the “peak” is something new you never knew was there.

Reliability of DCE (Stated Preference) Answers

Do people answer DCE scenarios honestly? Some do, some don’t. But the important thing is that experienced researchers like me know how to spot liars, those who have “done their best but have a big dose of uncertainty” and those who “are pretty sure of their answers” (even if their honestly given answers happen to be wrong).

How do liars give themselves away? Well, a DCE is typically operating in 5+ dimensions. The number of attributes (stimuli) defines the number of dimensions. If you lie – either just to be annoying or to get through the study quickly and collect reward points in some online panel – then you must lie in a way that “works across all the dimensions”. Humans generally can’t do this. I can often spot liars a mile off just eyeballing data. If you’re going to fool me you must remember your answers in 5+ dimensions and use some sort of crib sheet to ensure your lies are consistent and your “preferred options” all conform to your “story”.

If you suddenly prefer a cellphone with no memory and elsewhere you have made clear you like a lot of storage to look at porn family pictures then it will raise a red flag in my data checks. Because you’ve HAD to sacrifice something else. And when you make totally mutually contradictory choices I get suspicious. (Quite apart from the fact I’m male and we are no angels and I’ve yet to see a male who likes an electronic device that can’t “store a lot of stuff”).
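One concrete (and deliberately simplified) version of such a data check: count how often somebody picks an option that is worse than, or at best equal to, its rival on every single attribute. The “direction of better” assumptions below are mine, purely for illustration.

```python
# A simplified consistency check: count choices of strictly dominated options.
import numpy as np

better_is_higher = np.array([True, True, False])   # e.g. storage, battery: higher is better; price: lower is better

def dominated_choice_count(x_a, x_b, chose_a):
    """x_a, x_b: (n_sets, K) attribute levels; chose_a: 0/1 choices of option A."""
    sa = np.where(better_is_higher, x_a, -x_a)      # re-sign so "higher is always better"
    sb = np.where(better_is_higher, x_b, -x_b)
    a_dom = np.all(sa <= sb, axis=1) & np.any(sa < sb, axis=1)   # A strictly dominated by B
    b_dom = np.all(sb <= sa, axis=1) & np.any(sb < sa, axis=1)   # B strictly dominated by A
    picked_a = chose_a.astype(bool)
    return int(np.sum((picked_a & a_dom) | (~picked_a & b_dom)))

# A respondent racking up more than one or two of these is a candidate for the
# "high variance" group - or for a quiet word with the panel owner.
```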

So if you want to mess with me you’ve got to do it in 5+ dimensions. And I practically guarantee you can’t. Which means I give your ID to the panel owner. They typically won’t take action straightaway. However, when University of X samples you for THEIR study, they see the “flag” stating “this person might take the p!ss”. If they find you’ve clicked through quickly to get points and money quickly by paying no attention to the questions or are p!ssing about you know what happens? The panel company deletes your points ($). Bad luck – you just got booted out for being a d!ckhead.

TOP TIP – IF YOU’RE GOING TO LIE, MAKE SURE YOU CAN DO IT IN 9 DIMENSIONS BECAUSE OTHERWISE WE’LL SPOT YOU. YOU’LL BE BLACKLISTED PRONTO.

So it’s really much more hassle to lie in 9 dimensions than to just answer honestly. So don’t do it. If I’m unsure about you I won’t suggest you be blacklisted but I WILL instruct my analysis program to place you in the group with “higher variance” (i.e. you are “less consistent” and might have a tendency to give mutually contradictory answers). You’ll have a lower contribution to the final solution – I am unsure if you’re just someone who might have an odd demand function I’ve never before encountered, someone cognitively challenged by the task or you are taking the p*ss.

What about “other” stated preference methods? Willingness-to-pay (WTP) has a much longer pedigree, but is also much more contentious. People like Yves Smith of NakedCapitalism have expressed unease with WTP estimates. I happen to agree with her – despite the fact I’ve co-authored with one of the world’s top WTP experts, Richard Carson of UCSD, whose figures were used in the Exxon Valdez settlement. I just find it hard to believe most humans can “think up a valid number to value something”. Sorry Richard. However, I HAVE used WTP on occasion when I’ve had no other choice. It’s just one of those tools I think must be used very, very carefully.

Where Next?

DCEs will, going back to our 1950s politics example, present Mrs Smith with a series of hypothetical, but REALISTIC, election scenarios. If you use orthogonal designs then unfortunately SOME scenarios might seem odd…..but with knowledge and experience you can minimise the number of silly scenarios. So, present hypothetical manifestos (one per party). The key policies of each manifesto are the “stimuli” which are varied. Ask her to state her preferred party each time. If, out of 16 pairs, she says “Democrat” every time then this is both good and bad. Good in that we can “segment her off” as a diehard Democrat. Bad in that she violates a KEY assumption of regression models – that she has an “error” term. She is “deterministic” not “probabilistic”. Thus keeping her in ANY regression model leads to bias (at least, any regression model based on limited dependent variable outcomes – our logit and probit models).

In terms of why you should be “netting her out” if she says Democrat every time, please read the asterisked bit. A DCE, in an ideal situation, should be doable for a SINGLE PERSON. Try running a logit/probit regression in which the dependent variable doesn’t vary. The program (Stata or whatever) will crash or give an error – no variation in the dependent variable. Mrs Smith should not be in the regression! The “link function” is not designed to cope with 100% choices.
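A small numerical sketch of why: with sixteen “Democrat” answers and not a single “Republican”, the logit log-likelihood just keeps creeping towards zero as Mrs Smith’s latent position grows, so there is no finite maximum for the software to find.

```python
# Why a 100% chooser breaks the model: the likelihood has no finite maximum.
import numpy as np

def loglik(a, n_choices=16):
    """Intercept-only logit log-likelihood when ALL n_choices answers are 'Democrat'."""
    p = 1.0 / (1.0 + np.exp(-a))     # P(Democrat) implied by latent position a
    return n_choices * np.log(p)

for a in [0, 1, 2, 5, 10, 20]:
    print(f"latent position {a:>2}: log-likelihood {loglik(a):.8f}")
# The likelihood improves forever as 'a' heads towards +infinity, which is why the software
# reports that the outcome does not vary (or simply fails to converge) rather than
# returning an estimate for Mrs Smith.
```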

Why “Non-Experts” Make Horrid Mistakes

You might think adding Mr Patel, who does switch party depending on policy, solves things. I have lost count of the number of studies I have refereed whose authors thought this was “a solution”.

The thinking goes: “OK, 100% and 50% average to 75%, and 75% can go into the link function to give an average utility, right?” WRONG. The link function can’t deal with 100% but it can deal with 75%, so the problem looks solved. However, 100% from Mrs Smith is actually “infinite utility”. Do you know what you’ve actually done? You’ve asked “what’s the average of infinity and (say) ten?” Do you realise what a dumb question that is? If you think taking averages involving infinity is OK you should transfer out of all mathematical subjects pronto. Go do drama or liberal arts.
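A two-line sketch of the point: on the latent (log-odds) scale, 75% is a perfectly ordinary number, but 100% is infinite – so “averaging Mrs Smith in” means averaging with infinity.

```python
import numpy as np

p = np.array([0.50, 0.75, 1.00])            # Mr Patel-ish, the naive "average", Mrs Smith
with np.errstate(divide="ignore"):
    print(np.log(p / (1 - p)))              # [0.  1.0986...  inf] - her implied utility is infinite
```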

You KNOW Mrs Smith votes Democrat no matter what. Use some family blogging common sense – this is what frustrates me so much about people using these models “cause they look cool”. Separate her out.

Results?

Results from a limited dependent variable model (logit/probit) are notoriously hard to interpret. Those who have seen my objections on NakedCapitalism will know what is coming. Here is a hopefully abbreviated version of the explanation.

Here’s the math. We can solve the likelihood function to find what mean and variance are “most likely” to have produced the pattern of data we observed. Great. However, in limited dependent variable models this hits a wall. Why? The mean and (technically a function of) the variance are multiplied by each other. So, let’s say you get the “solution” of “8”. If it is mean × variance, is this 8×1? Or 4×2? Or 2×4? Or 1×8? Or any of an infinite number of combinations?

The underlying problem is that we have ONE equation with TWO unknowns. You need a SECOND equation in order to split the mean and variance.
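In symbols (my own notation for the standard random-utility result), the probability of observing “Democrat” is

$$
P(\text{Democrat}\mid x)\;=\;\frac{e^{\lambda\, x'\beta}}{1+e^{\lambda\, x'\beta}},\qquad \lambda \propto \frac{1}{\sigma},
$$

where $\beta$ is the vector of “mean” effects and $\sigma$ is the error standard deviation (the “variance” bit). Only the product $\lambda\beta$ ever enters the likelihood, so the data alone can never split the two apart.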

A Second Equation?

A second set of data can come from many sources. McFadden (who won the “Economics Nobel”) used real data on people’s decisions regarding public transport to do this – it “calibrated his model” (in other words, gave the right starting numbers on which all the rest rested!). Using real data – “Revealed Preferences” – as opposed to the “Stated Preferences” discussed here is very useful.[2,5] RP data is good, but it is useless when you want to design something fundamentally new or “think outside the box”.

Where Next?

Attitudes have proven to be a salvation.[4] Segment people according to attitudes then the “uncalibrated voting data” you have can suddenly become highly predictive. It did for me at the 2017 General Election! 🙂

However, whenever I see a story with “rates” quoted for groups……be it “views on vaccination” or anything else, I roll my eyes. Why? Because whilst “patterns” (definitely will vaccinate/probably will vaccinate/not sure/probably won’t/definitely won’t) are somewhat stable, the initial calibration is often wrong. Thus, the media just yesterday in the UK breathlessly pronounced that the proportion of “vaccine hesitant” people who did, in fact, get vaccinated was really high. Twenty years of working in the field told me something……..the skewed proportions expressed in stated preference studies are almost NEVER reproduced in real life. I knew full well that the proportion of people refusing the vaccine would NOT BE THE SKY-HIGH NUMBER EARLIER MEDIA COVERAGE OF HESITANCY SURVEYS HAD SUGGESTED – in fact, when “revealed preferences” became available, we’d see that whilst the “pattern” across the (say, 5) response categories might be maintained, the raw rates would stop being so skewed and would be “squashed” towards 20% each. In effect, this “sky high” number of anti-vaxxers would be squashed towards one fifth (all the response category proportions would get closer to 1/n, where n is the number of response categories). It might even be the case that the pattern changed and the “no, never” bunch went below 20%. THAT is why you must “calibrate” stated preference data.
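A toy illustration of the “squashing” (numbers entirely invented): the same underlying attractiveness of the five response categories, run through a high “scale” (a clean, everything-else-equal survey setting) versus a low scale (messy reality).

```python
# Skewed stated-preference shares get squashed towards 1/n as real-world noise lowers the scale.
import numpy as np

v = np.array([2.0, 0.5, 0.0, -0.5, -2.0])     # latent attractiveness of 5 response categories

def shares(scale):
    e = np.exp(scale * v)
    return e / e.sum()

print("stated preference (high scale): ", np.round(shares(2.0), 2))   # very skewed
print("revealed preference (low scale):", np.round(shares(0.5), 2))   # squashed towards 1/n
print("uniform 1/n benchmark:          ", np.round(np.full(5, 0.2), 2))
```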

Dr Strange knew there was only one combination of stimuli that would lead to Thanos being destroyed in such a way that the “snap” was reversed. Unfortunately that combination required Tony Stark to die too. Sometimes the “hugely unlikely” is what is observed. It all depends on what is being changed around us.

TL;DR: Next time you see a study saying the “proportion of people who will engage in behaviour x” is some outlandishly high or low y%, you should immediately be wary.

*There is an absolutely CRUCIAL point to be made here regarding DCEs and which will head off (I hope) the criticism that I know a bunch of readers with statistical knowledge are just itching to say. The “standard errors” and hence confidence intervals around “utility estimates” or “party loyalty estimates” or whatever you are modelling in a logit/probit model, are calculated by looking at between-subject variation. DCEs are, at their core, NOT about between-subject variation. Their default assumption is that Mrs Smith’s “degree of certainty” is NOT the same as Mr Patel’s. In a traditional regression with a continuous outcome, all these different “types of variation” tend to come out in the wash and you’ll get an answer that might be imprecise…..but it’s unbiased.

In a limited dependent variable model (logit/probit), this heteroscedasticity (differences in “sureness”/”certainty”/”level of understanding”/”engagement with the task” between people) causes BIAS (not just a nuisance but an unaddressable problem). This is why in traditional logit models (probit too, but logits are more common) – like a clinical study with ONE endpoint per patient (cure/not……live/die…..) – you can’t measure within-subject variation: YOU ONLY HAVE ONE DATAPOINT! The “quantification of variability” is ENTIRELY BETWEEN-SUBJECT variation. Mathematical psychologists have shown for half a century that humans are inconsistent in many, many areas. A well-designed DCE can measure this. I’ve done it, finding exactly what my predecessors found – lower levels of education, advanced age, etc, all cause you to be MORE INCONSISTENT (higher variance, manifesting as smaller beta estimates and choice frequencies closer to 1/n). Until you “net this out” you can’t aggregate Mrs Smith’s and Mr Patel’s data. THIS IS WHY PSEPHOLOGISTS GET ELECTION PREDICTIONS WRONG SO OFTEN.

 

STAY TUNED FOR PART 2: WHY INTERPRETATION OF DISCRETE DATA IS FREQUENTLY SO VERY VERY WRONG