12/17/18

Nate Silver, the Democratic primaries, and the trouble with Likert scores

Yesterday, Nate Silver tweeted out a table purporting to calculate Likert scores for a list of potential contenders in the Democratic primaries:

This ranking, he added, "isn't that far from how I'd rate the candidates' chances." But is this actually a plausible way of assessing a candidate's chances? And did he even calculate the Likert score correctly? Spoiler: no, and no.

Calculating the Likert score

The first problem here is that Silver begins his calculation by casually omitting a huge fraction of the data: "voters who didn't know or had no opinion about the candidate". Even if we assume that none of these voters knew who the candidate was, this immediately introduces significant uncertainty. For example, by Silver's accounting, only 17% of the voters had an opinion about Andrew Yang. Can we really make an apples-to-apples comparison between him and Hillary Clinton, who received favorable/unfavorable answers from 94% of the voters?
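Silver didn't show his arithmetic, but the standard calculation is easy to reconstruct: weight the four response categories from 4 ("very favorable") down to 1 ("very unfavorable"), then average over only the voters who answered. The sketch below uses hypothetical response shares (not the actual poll numbers) to show why that renormalization matters: a candidate only 17% of voters can rate can post exactly the same score as a candidate 94% can rate.

```python
# A minimal sketch of the 4-point calculation Silver appears to be doing:
# weight the four response categories from 4 ("very favorable") down to
# 1 ("very unfavorable"), then average over only the voters who expressed
# an opinion. The shares below are hypothetical, not actual poll numbers.

def likert_4pt(very_fav, smwt_fav, smwt_unfav, very_unfav):
    """Likert score on a 1-4 scale, with 'not sure' responses dropped."""
    weights = {4: very_fav, 3: smwt_fav, 2: smwt_unfav, 1: very_unfav}
    answered = sum(weights.values())  # share of voters with any opinion
    return sum(score * share for score, share in weights.items()) / answered

# Same score, wildly different coverage:
print(round(likert_4pt(0.05, 0.07, 0.03, 0.02), 2))  # 17% answered -> 2.88
print(round(likert_4pt(0.28, 0.38, 0.17, 0.11), 2))  # 94% answered -> 2.88
```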

Worse (and this is where Silver really gets into trouble), "didn't know or had no opinion about the candidate" is not an option on the survey. The data he is actually throwing away is the answer "not sure" - and that answer can come from voters who have neutral or ambivalent feelings about the candidate. This is a response you can include in a Likert score! And if you include it, the rankings change:

Here, I've ranked the candidates on a 5-point Likert scale; for comparison, I've also made a proportional adjustment of Silver's 4-point scores to the same range. Admit that some candidates may just inspire neutral or ambivalent responses, and suddenly O'Rourke, Harris, Booker, and Klobuchar take massive hits to their Likert scores.
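For concreteness, here's the same hypothetical low-name-recognition candidate rescored this way, with "not sure" counted as the neutral midpoint (3) and the average taken over all respondents. The rescaling function is just a linear map from the 1-4 range onto 1-5, the simplest way to put the two methods side by side; again, the shares are hypothetical, not the poll's.

```python
# The 5-point alternative: score "not sure" as the neutral midpoint (3)
# and average over ALL respondents. Shares are hypothetical.

def likert_5pt(very_fav, smwt_fav, not_sure, smwt_unfav, very_unfav):
    """Likert score on a 1-5 scale, counting 'not sure' as neutral."""
    weights = {5: very_fav, 4: smwt_fav, 3: not_sure,
               2: smwt_unfav, 1: very_unfav}
    total = sum(weights.values())  # ~1.0 once "not sure" is included
    return sum(score * share for score, share in weights.items()) / total

def rescale_4pt_to_5pt(score):
    """Linearly map a 1-4 Likert score onto the 1-5 range."""
    return 1 + (score - 1) * (5 - 1) / (4 - 1)

# Dropping "not sure" made this candidate look comfortably favorable;
# counting it as neutral leaves them barely above the midpoint.
print(round(likert_5pt(0.05, 0.07, 0.83, 0.03, 0.02), 2))  # 3.10
print(round(rescale_4pt_to_5pt(2.88), 2))                  # 3.51, "not sure" dropped
```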

Of course, the most realistic scenario is that some "not sure" answers were ambivalent or neutral, and some expressed genuine ignorance about the candidate. But granting that, the appropriate conclusion is that the data we're working with here just isn't granular enough to calculate legitimate Likert scores.


What does this tell us about electability?

Silver thinks that Likert scores give us some insight into the "chances" that a candidate could win. There is, of course, a pretty simple way to test this: just look at what they would have predicted in previous elections. Here's one I put together based on early favorability polling of the Democratic candidates in 2016:

This is obviously wrong on multiple levels. Sanders, though he had the highest Likert score, was never favored to win. Clinton was a significantly stronger candidate than Biden, and both were certainly much stronger than Jim Webb. Likert scores could not predict the massive institutional advantages that Clinton would bring to the race, they could not predict the combination of political pressure and personal tragedy that would keep Biden out of the race, and they could not predict the way that Sanders' populist message would undercut Webb's campaign and leave O'Malley as the only third option. (Admittedly, it seems to have gotten Chafee right.)

I don't recall Silver posting a table like this in 2016, for obvious reasons.