Pollsters consistently overestimated the share of votes for President-Elect Joe Biden and Democratic Senatorial candidates in the 2020 election. While polls correctly predicted Biden’s victory, polling averages overestimated his margins in states he won and underestimated Donald Trump’s margins in states that Trump won.
The errors were not the result of random variations in sampling. Instead, sample selection consistently underrepresented Republican voters. The underestimate of Republican support in the Presidential election repeats the polling error in 2016 when most polls predicted an electoral majority for Hillary Clinton.
Although the overall shift in votes between 2016 and 2020 was small, Joe Biden flipped enough states from the Republican column to win the 2020 Presidential election. Data from Monday, November 9th showed Biden ahead in Georgia, Arizona, Michigan, Nevada, Pennsylvania, and Wisconsin, with media projections showing large enough margins in Michigan, Wisconsin, Nevada, and Pennsylvania to declare him the winner. But in each of the competitive states, polls overestimated Biden’s margin. The data here is from fivethirtyeight.com, which provided estimates of each candidate’s polling percentages based on averages of recent polls.
In Ohio, the fivethirtyeight.com polling average estimated that Trump would win the state by 0.6%. His actual margin was 8.1%. In Florida, polls predicted a margin of 2.5% in favor of Biden. Instead, Trump won the state by 3.4%. In Wisconsin, the overestimate of Biden’s edge was 7.6%. For the ten competitive states, polls overestimated Biden’s share by 4.5% on average.
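The error arithmetic is a signed subtraction of the actual margin from the polled margin. A minimal sketch using the Ohio and Florida figures quoted above (margins signed so that positive values favor Biden; Wisconsin's poll/actual split is not given in the text, only its 7.6-point error, so it is omitted):

```python
# Signed margins in percentage points: positive = Biden lead, negative = Trump lead.
# Figures are the ones quoted in the text for Ohio and Florida.
polls = {"Ohio": -0.6, "Florida": 2.5}
actuals = {"Ohio": -8.1, "Florida": -3.4}

def biden_overestimate(state):
    """Points by which the polling average overstated Biden's margin."""
    return round(polls[state] - actuals[state], 1)

for state in polls:
    print(state, biden_overestimate(state))  # Ohio 7.5, Florida 5.9
```

The same subtraction across the ten competitive states yields the 4.5-point average overestimate the text reports.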
Fivethirtyeight.com attempts to improve poll estimates by creating a complex model. “To calculate what percentage of the vote each candidate is forecasted to get…, our model starts with the weighted polling average [which removes some polls they consider unreliable and overweights others that are more accurate] and then factors in economic conditions, demographics, uncertainty and how states with similar characteristics are forecasted to vote.”
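Fivethirtyeight.com’s actual weights and adjustments are proprietary; the core idea of a weighted polling average, though, can be sketched generically. The margins and weights below are illustrative assumptions, not real polling data:

```python
# A generic weighted polling average: each poll carries a weight that might
# reflect sample size, recency, or the pollster's track record. These numbers
# are purely illustrative -- not FiveThirtyEight's model or data.
def weighted_average(polls):
    """polls: list of (margin, weight) pairs; returns the weighted mean margin."""
    total_weight = sum(w for _, w in polls)
    return sum(margin * w for margin, w in polls) / total_weight

sample_polls = [(8.0, 1.5), (5.0, 1.0), (6.5, 0.5)]  # (candidate margin, weight)
print(weighted_average(sample_polls))  # 6.75
```

A simple average of the same three margins would be 6.5; the weighting pulls the estimate toward the polls judged more reliable.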
Another popular political site, realclearpolitics.com, provides an estimate that is a simple average of the most recent polls in each state. That approach also produced polling averages that consistently overestimated Biden’s share. But did the more sophisticated model used by fivethirtyeight.com estimate the vote margins better than the simple averages used by realclearpolitics.com?
The data shows that in eight of ten states – all but Georgia and Texas – the simple Real Clear Politics average had a smaller margin error than did the fivethirtyeight.com model. The complicated model used by fivethirtyeight.com was likely overspecified. By adding more variables to explain the election’s outcome, an overspecified model may “explain” variations in past elections that are actually the result of random factors that have no predictive value for future elections. See this explanation from a company that creates statistical tools: “An overfit model is one that is too complicated for your data set. When this happens, the regression model becomes tailored to fit the quirks and random noise in your specific sample rather than reflecting the overall population. If you drew another sample, it would have its own quirks, and your original overfit model would not likely fit the new data.”
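The quoted point can be demonstrated with a small simulation: a flexible model fits one noisy sample better than a simple one, but that advantage need not carry over to a fresh sample drawn from the same process. A sketch on synthetic data (not election data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "past elections": a simple linear relationship plus random noise.
x = np.linspace(0, 1, 12)
y_train = 2 * x + rng.normal(0, 0.3, x.size)
# A fresh sample from the same process -- "the next election."
y_test = 2 * x + rng.normal(0, 0.3, x.size)

def fit_and_score(degree):
    """Fit a polynomial of the given degree to the training sample and
    return (MSE on the training sample, MSE on the fresh sample)."""
    coeffs = np.polyfit(x, y_train, degree)
    pred = np.polyval(coeffs, x)
    return np.mean((pred - y_train) ** 2), np.mean((pred - y_test) ** 2)

simple = fit_and_score(1)    # one explanatory trend
flexible = fit_and_score(7)  # many extra terms

# The flexible model always fits the sample it was trained on at least as well,
# but its apparent edge typically evaporates on fresh data.
print("train MSE:", round(simple[0], 3), round(flexible[0], 3))
print("test  MSE:", round(simple[1], 3), round(flexible[1], 3))
```

The degree-7 fit chases the noise in the training sample; on the second sample, which has its own noise, that extra flexibility buys nothing.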
The polling sample bias in favor of Democrats was also present in Senate races. The table below shows that poll estimates consistently overestimated the Democratic share of the vote. On the chart, blue columns are poll estimates, orange columns are actual vote margins. Note that the orange (actual vote) columns are in almost every case taller than the blue (projected) columns where the margins had a positive value, indicating a margin in favor of Republicans. Where margins had negative values, the shorter orange (actual) columns reflect the fact that Democratic candidates won the actual vote by smaller margins than pre-election polls predicted. In only one of the 32 contested elections for Senate was the actual Democratic victory margin, as of Monday, November 9th, larger than the margin predicted by polls (John Hickenlooper in Colorado).
At best, with representative samples, electoral polls can produce a prediction of candidate voting percentages within a margin of error that depends on sample size. When websites like fivethirtyeight.com and realclearpolitics.com provide aggregated polling estimates, one purpose is to provide better estimates of the likely outcome, because the combined polls provide larger sample sizes. In that respect, these efforts resemble the meta-analyses that biological scientists undertake in attempting to understand factors associated with disease prevalence, or treatment effectiveness.
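The dependence of a poll’s precision on sample size follows from the standard margin-of-error formula for a proportion, assuming a simple random sample (which real polls only approximate):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion p with sample size n,
    under simple random sampling: z * sqrt(p(1-p)/n)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical single poll of ~1,000 respondents:
print(round(100 * margin_of_error(1000), 1))   # 3.1 points
# Pooling polls into an aggregate of ~10,000 shrinks the margin:
print(round(100 * margin_of_error(10000), 1))  # 1.0 points
```

This is why aggregation helps: a tenfold increase in effective sample size cuts the sampling margin of error roughly threefold. But the formula only accounts for random sampling error; it does nothing to correct a sample that is systematically unrepresentative.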
The 2020 election, like that in 2016, saw consistent sample biases in favor of Democratic candidates. Across the Presidential and Senatorial contests, the gap between the biased estimates and the actual votes produced a wrong prediction of the winner in only one case: North Carolina, where Thom Tillis, the Republican candidate, won despite a predicted loss in the polls. In that respect, 2020 differed from 2016, when polling averages predicted a win for Hillary Clinton.
Why were polling samples biased?
The goal of polling organizations is to draw representative samples of voters. But doing so is not simple. To create representative samples, pollsters attempt to match the demographic characteristics of voters that relate to voting behavior. So respondents are questioned about characteristics like race, ethnicity, income, and age that have in the past been related to voting preferences.
After the 2016 election, many analyses contended that polls missed Donald Trump’s victory partly because they oversampled voters with college degrees. Other factors included a late shift in preferences towards Trump and the “shy voters” who did not reveal their preference for Trump in pre-election polling.
In 2020, voting preferences were quite stable – polls did not show a significant shift in preferences during the campaign period.
Estimating turnout within demographic groups is difficult because the share of each group that votes varies from year to year, depending on significant factors including the candidates running and external trends. Candidates Biden and Trump motivated voters in different ways than did Barack Obama and John McCain. Because group-level turnout shifts with each election, polling organizations have to make best guesses about turnout in order to arrive at their voting estimates.
At this point, we do not know whether pollsters’ turnout models accurately represented actual voters’ demographic characteristics. We know that polls attempted to correct the overrepresentation of voters with college degrees seen in the prior election, but in the end, modeling turnout for specific demographic groups involves a set of educated guesses. In Florida, the large polling miss and the significant shift of Hispanic voters toward Donald Trump raise the question of whether they were adequately represented in polling samples.
Another factor that could be involved in polling error is the differing turnout operations of the Republican and Democratic presidential campaigns. During the current campaign, the Trump organization reportedly placed more effort on direct voter turnout activities than did the Biden campaign. Trump’s operation may have been more effective than Biden’s in turning out voters predisposed to vote for him.
Finally, a number of Trump supporters may well have been “shy voters.” Polling has become more difficult over time. Response rates are declining. People who are bombarded by spam phone calls now can simply refuse to participate by declining to answer calls that come from numbers that are not known on caller ID. In the current, highly charged political environment, more people appear to be reluctant to disclose their political preferences.
As a candidate, Donald Trump has consistently projected distrust of governmental and media institutions. He has often characterized people in the media and polling organizations as “liars.” He refers to government institutions that he believes are hostile as the “deep state.” Trump’s denigration of these institutions has likely increased distrust of them by his supporters. Differential polling participation rates between Trump and Biden supporters could have occurred within the demographic groups that pollsters use to model the electorate.
These impediments to acquiring representative samples produced biased polling results in both the 2016 and 2020 elections. While the sampling errors in 2020 did not produce a botched call like 2016’s prediction that Hillary Clinton would win the presidency, we remain in an environment where the electorate is closely divided.
Attempts by organizations like fivethirtyeight.com to improve the predictive quality of polling did not succeed. In fact, fivethirtyeight.com’s estimates were less accurate than simple polling averages, probably because it used an overspecified model.
The media’s continued emphasis on polling to predict the outcome of Presidential elections undermines public confidence in both polling and the electoral process. Sampling variability alone limits the accuracy of predictions in closely contested races (most polls carry a margin of error of three or four points). Beyond that, polling organizations have in the recent past produced voter samples that are unrepresentative of the actual electorate, introducing consistent biases that favor Democratic candidates in predicted outcomes.
When newspapers, news networks, and internet sites publish stories that indicate that a candidate is “leading” in closely contested elections despite the uncertainty associated with sampling outcomes, and the poor record of polling organizations in creating representative samples of the electorate, they are doing a disservice to their audiences and to public confidence in elections.