Why Polls of Third-Party Candidate Support are Usually Wrong

Libertarian candidate Gary Johnson’s website boasts that the third-party candidate is “polling nationally from 2.4% to 9% and various states have him polling up to 15%.” Like poll numbers for countless minor political candidates before him, these figures are almost certainly wrong, for an intriguing statistical reason.

Imagine a poll about a candidate named Smith who represents a major party and has, in truth, 40% support in the population. Imagine further that the poll is accurate 90% of the time. The other 10% of the time (due to leading questions, pollster error, voter confusion, etc.) the poll counts someone who in fact will vote for Smith as not doing so, or counts someone who will in fact vote for someone else as a Smith voter.

To keep the example simple, assume that the poll is only concerned with whether people will vote for Smith or not, where the non-Smith category includes voting for candidates Jones, Green, or Wilson, or not voting at all. Again for simplicity, assume the poll surveys 100 voters, so the number of voters counted for Smith equals the percentage of predicted support for him. The table below shows what the poll will conclude.
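
                                Counted for Smith    Counted as non-Smith    Total
    Actual Smith voters                36                     4                40
    Actual non-Smith voters             6                    54                60
    Poll’s count                       42                    58               100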

The poll predicts that Smith will garner 42% of the vote (i.e., the poll will count 42 of the 100 voters it surveys as Smith supporters). Of those 42, 36 are counted correctly (90% of Smith’s 40 actual supporters) and 6 incorrectly (10% of the 60 voters who really aren’t going to vote for Smith got counted as supporting him). The 42% result is wrong, but it’s not bad at all as an estimate, whether you compare raw numbers (40% vs. 42% support) or compare the size of the error to the base rate of support (2 points is only 5% of Smith’s true support of 40%).

The estimate is in the right ballpark because Smith’s level of support is near 50%. Indeed, if his support were in fact 45%, the poll would be even more accurate, despite its 10% error rate. In contrast, imagine that Smith’s true level of support is far from the midpoint, for example 10%. Here are the same poll results with the same number of respondents and the same error rate.
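
                                Counted for Smith    Counted as non-Smith    Total
    Actual Smith voters                 9                     1                10
    Actual non-Smith voters             9                    81                90
    Poll’s count                       18                    82               100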

The poll predicts 18% support for Smith. This is highly inaccurate whether one looks at the raw difference (8 percentage points off from reality, versus only 2 points when Smith was at 40% support) or compares predicted to actual support (the predicted 18% is almost double the true 10%). Why did the same poll that proved fairly accurate for Smith at 40% prove so misleading when he was at 10%?
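
The arithmetic behind both tables can be sketched in a few lines of Python (the function name and setup are mine, following the post’s simple misclassification model):

```python
def predicted_support(true_pct, accuracy):
    """Expected poll result under the post's model: a share `accuracy` of
    actual supporters is counted correctly, while a share (1 - accuracy)
    of everyone else is miscounted as supporting the candidate."""
    return accuracy * true_pct + (1 - accuracy) * (100 - true_pct)

print(predicted_support(40, 0.90))  # roughly 42 -- close to the truth
print(predicted_support(10, 0.90))  # roughly 18 -- nearly double the truth
```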

What is going on here is less complex than it may seem: it is simply harder to predict events that are unlikely than events that are likely. If a fair coin is flipped over and over and you have to guess on which particular flip it will come up heads, you have a 50-50 shot of winning the game. But if the same game is played with an unbalanced coin that comes up heads only 1% of the time, you will almost certainly not guess the right flip, even if you are allowed to play many times. Indeed, any system you might use to predict when the elusive heads result will occur will be less accurate over time than simply predicting that the coin will never come up heads, no matter how many times it is flipped; that prediction is correct 99% of the time. (This same phenomenon underlies why so many seemingly sage predictions about how politicians with certain characteristics can’t be elected president are in fact vapid.)
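
A quick simulation illustrates the point (a sketch under my own assumed guessing strategy and sample size):

```python
import random

random.seed(0)
N = 100_000
flips = [random.random() < 0.01 for _ in range(N)]  # biased coin: heads 1% of the time

# Strategy 1: always predict "this flip will not be heads"
always_tails_accuracy = sum(1 for f in flips if not f) / N

# Strategy 2: guess "heads" at the coin's own 1% base rate
guesses = [random.random() < 0.01 for _ in range(N)]
matching_accuracy = sum(1 for f, g in zip(flips, guesses) if f == g) / N

print(f"'never heads' accuracy:      {always_tails_accuracy:.3f}")  # roughly 0.990
print(f"base-rate guessing accuracy: {matching_accuracy:.3f}")      # roughly 0.980
```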

The problem gets worse the further a candidate departs from 50% support. If Smith were actually at 1% support, the result of the 90% accurate poll would put Smith at a completely misleading 11%. More accurate polling helps but does not solve this problem: a 95% accurate poll of a candidate who actually has 1% support will still typically report close to 6%, overstating that support nearly sixfold.
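
Using the same arithmetic as in the sketch above: 0.90 × 1 + 0.10 × 99 = 10.8, which rounds to the 11% figure, and 0.95 × 1 + 0.05 × 99 = 5.9.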

Finally, one might assume that averaging the results of different polls can surmount the challenges of estimating support for minor political candidates. That does indeed improve accuracy for a third-party candidate such as Theodore Roosevelt with a fairly high level of support, but not when a candidate is at the typical American third-party level of 1% or 2% support. A poll that overstates such a candidate’s support can multiply it manyfold, as shown above, but a poll that understates it can’t go lower than 0%. The average of many error-ridden polls will thus still tend to overstate the candidate’s level of support, because the errors have more room to grow in the upward than in the downward direction.
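
A small simulation of the same misclassification model shows why averaging cannot rescue the estimate; the parameter names and values here are my own assumptions:

```python
import random

random.seed(42)

TRUE_SUPPORT = 0.01  # the candidate's actual support
ERROR_RATE = 0.10    # chance any single response is misrecorded
SAMPLE_SIZE = 1000
N_POLLS = 500

def run_poll():
    counted = 0
    for _ in range(SAMPLE_SIZE):
        truly_supports = random.random() < TRUE_SUPPORT
        misread = random.random() < ERROR_RATE
        if truly_supports != misread:  # counted as a supporter iff exactly one holds
            counted += 1
    return counted / SAMPLE_SIZE

estimates = [run_poll() for _ in range(N_POLLS)]
print(sum(estimates) / N_POLLS)  # converges to about 0.108, not 0.01
```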

Author: Keith Humphreys

Keith Humphreys is the Esther Ting Memorial Professor of Psychiatry at Stanford University and an Honorary Professor of Psychiatry at King's College London. His research, teaching and writing have focused on addictive disorders, self-help organizations (e.g., breast cancer support groups, Alcoholics Anonymous), evaluation research methods, and public policy related to health care, mental illness, veterans, drugs, crime and correctional systems. Professor Humphreys' more than 300 scholarly articles, monographs and books have been cited over thirteen thousand times by scientific colleagues. He is a regular contributor to The Washington Post and has also written for the New York Times, Wall Street Journal, Washington Monthly, San Francisco Chronicle, The Guardian (UK), The Telegraph (UK), Times Higher Education (UK), Crossbow (UK) and other media outlets.

14 thoughts on “Why Polls of Third-Party Candidate Support are Usually Wrong”

    1. Indeed, hat tip to the estimable Reverend, and in my own field to Meehl and Rosen.

  1. Another thing. In a first-past-the-post or plurality-wins election (which is how most states’ electoral votes are chosen), third parties tend to get squeezed out as it becomes ever clearer that they have no chance of winning. A protest pollster-response in July is easier than an actual protest vote in the booth in November. I predict that Gary Johnson will receive well below 2.4 percent for that reason alone.

    1. Hi Ken: I agree that is in the soup as a source of error. It could be part of the phenomenon I describe in the post (i.e., part of the 10% error rate in the hypothetical poll), but it could also exert an independent influence in producing inaccurate poll numbers for third-party candidates.

  2. There is another factor that comes in as well (although in theory pollsters should be accounting for it): strategic voting. Although the details depend on a respondent’s personal worldview and appetite for martyrdom or deniability, it’s generally a smart thing for even mild supporters of a third-party candidate to proclaim that support in polls, but to vote for their second-choice candidate in an election. (Indeed, even people who don’t support the third-party candidate, but dislike the two majors, have a personal incentive to be supporters for polling purposes.)

    Of course I’m still scarred by 1980, when some pollster or other asked, a few days before the election, “Would you vote for John Anderson if you thought he could win?” and got something like 35% affirmatives…

  3. Sorry, but as far as I can tell, what you are arguing goes directly against what everyone is taught in stats: standard errors get smaller the farther the true ‘p’ is from .5.

    I think the specific flaw in the way you frame the issue is that you assume a “90% accurate poll” means the same thing in all cases. This is very far from true.

    Go to a binomial calculator. If I’m doing things correctly, here is what you get: If someone’s true level of support is, say, 10%, and we do a sample of 100, the probability of getting more than 15 of your sample saying they support that person is about 4%; the probability of getting fewer than 5 is about 2.4%. Total probability of being 5% off, in either direction, is thus about 6.4%.

    In contrast, if someone’s true level of support is 50%, and we do a sample of 100, the probability of getting more than 55 of your sample saying they support that person is about 13.5%; the probability of getting fewer than 45 is also about 13.5%. Total probability of being 5% off, in either direction, is thus about 27%.

    If one, as is more typical, does a sample of 1000, both of those total probabilities (the 27% and the 6.4%) become much smaller, but it is the 6.4 number that declines most dramatically.
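
    Those tail probabilities can be checked directly (a sketch assuming SciPy is available; the values in the comments are approximate):

    ```python
    from scipy.stats import binom

    # True support 10%, sample of 100
    print(binom.sf(15, 100, 0.10))    # P(more than 15 counted as supporters), about 0.04
    print(binom.cdf(4, 100, 0.10))    # P(fewer than 5), about 0.024

    # True support 50%, sample of 100
    print(binom.sf(55, 100, 0.50))    # P(more than 55), about 0.136
    print(binom.cdf(44, 100, 0.50))   # P(fewer than 45), about 0.136

    # With a sample of 1000, the 10% case shrinks far more sharply
    print(binom.sf(150, 1000, 0.10))  # vanishingly small
    print(binom.sf(550, 1000, 0.50))  # about 0.0007
    ```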

    1. To put it in the language of polls, assuming a sample of 1000 and 95% confidence level, the margin of error for the reported support for a candidate who actually has 50% support in the population as a whole is much larger than the margin of error for the reported support for a candidate who actually has 1% support in the population. For the former it is +/- 3%. For the latter it is +/- 0.6%.
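
      In formula terms, that is the standard normal-approximation margin of error, z · sqrt(p(1 − p)/n), sketched here with a helper function of my own naming:

      ```python
      import math

      def margin_of_error(p, n, z=1.96):  # normal-approximation MOE at 95% confidence
          return z * math.sqrt(p * (1 - p) / n)

      print(margin_of_error(0.50, 1000))  # about 0.031, i.e. +/- 3 points
      print(margin_of_error(0.01, 1000))  # about 0.006, i.e. +/- 0.6 points
      ```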

      1. Oops. I now see that you wrote that the errors were because of “leading questions, pollster error, voter confusion etc”, not sampling variability, which is what I was assuming you meant. I need to read more carefully. Guess that is why I’m “confused”.

        1. Hi confused student: Even though it is a different type of error than I am discussing here, you are quite right to note that biased sampling is a common source of errors in polling; indeed, it may be the most common source.

          1. I was assuming simple unbiased sampling variability, not biased sampling.

            I will note, though, that by assuming just 2 options when the whole issue is 3rd party candidates, your numbers overstate the problem a bit. If we have Adams, Brown, and Smith, with their proportion in the underlying population being 50%, 40%, and 10%, respectively, then if the poll misreads preferences 10% of the time, and we assume the misreadings are equally distributed among the 2 other candidates, then Smith’s final tally would be 13.5%, which is less than half as wrong as the 18% in your second table. And if Smith actually only had 1% in the population while Adams had 50% and Brown had 49%, the result of a 90% accurate poll would put Smith at just under 6% (not great, but better than the 11% you calculated), while a 95% accurate poll would put Smith at a bit less than 3.5% (again not great, but better than the almost 6% that I believe your method produces).
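
            A sketch of that three-candidate arithmetic (the function name and the even-split assumption are mine, following the model described above):

            ```python
            def misread_tally(true_shares, accuracy):
                """Expected poll shares when each response is misread with probability
                (1 - accuracy) and misreadings split evenly among the other candidates."""
                spill = (1 - accuracy) / (len(true_shares) - 1)
                return [accuracy * p + spill * (sum(true_shares) - p) for p in true_shares]

            print(misread_tally([0.50, 0.40, 0.10], 0.90))  # Smith lands at 0.135
            print(misread_tally([0.50, 0.49, 0.01], 0.90))  # Smith just under 0.06
            print(misread_tally([0.50, 0.49, 0.01], 0.95))  # Smith a bit under 0.035
            ```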

  4. There’s a lot of who-struck-John in the above comments, but what’s missing is any mention of the specific type of error that causes the extreme overstatement in estimates of unlikely events from among multiple choices. That is the fact that the number of Type 1 errors (false positives) for the unlikely event in a paired choice is exactly equal to the number of Type 2 errors (false negatives) for the likely event.

    Suppose you offer two choices (A and B) to a large sample of respondents, and one of them (Choice A) is a small minority choice. Suppose you don’t differentiate between Type 1 and Type 2 error rates for the different choices; rather, you simply assume that X percent of your responses will be incorrect. Then the total number of false positives for your unlikely Choice A can greatly outweigh its true positives, simply because your 10% of Type 2 errors for Choice B will represent a large number of false positives for Choice A.
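
    For a concrete illustration (my numbers): with Choice A at 1% true support in a sample of 100 and a 10% error rate, the true positives for A number about 0.9 × 1 = 0.9, while Choice B’s Type 2 errors contribute about 0.1 × 99 = 9.9 false positives for A, roughly eleven times as many.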
