# As state poll results show ties in Trump-Harris race, is it voters or pollsters?

Recent polls in the seven key swing states show a surprisingly tight presidential election: 124 of the last 321 polls taken in those states — nearly 39% — show margins of 1 percentage point or less.

In fact, the state polls show not just an astonishingly close race, but an improbably close one. Even in a truly tied election, the randomness associated with polling would generate more varied and less clustered results—unless the state polls and poll averages are artificially close due to decisions made by pollsters.

The results of a poll depend on the opinions of the voters and the decisions of the pollsters. Decisions about how to weight a poll to match the expected composition of the electorate can shift its reported result by as much as 8 points. This is true even when pollsters make perfectly reasonable weighting decisions. After the polling misses of 2016 and 2020, pollsters have been forced to consider new methods for weighting their survey data and for addressing declining response rates.

But the fact that so many polls report exactly the same margins raises a troubling possibility: that some pollsters make adjustments in such similar ways that results pile up, creating a potential illusion of certainty – or that some pollsters even look to other people’s results to guide their own (i.e., “herding”). If so, the artificial similarity between polls can create a false impression that may not pan out on election day. We could easily have a very close election. But there is also a significant chance that one candidate or the other could sweep all the swing states and win the presidency fairly comfortably, at least compared with the evenly balanced picture in the polls.

## What should we see in a perfect polling world due to randomness?

In a world perfect for polling—a researcher’s paradise where every voter can be contacted and every contacted voter responds—we can use math to calculate how much variation there must be due to the fact that voters are randomly selected to take a poll.

If a race in this world were truly tied 50%-50%, the polls wouldn’t all produce results that split 50%-50%. Imagine if pollsters in this world conducted 100 identical surveys of 863 randomly selected voters (that’s the average sample size of this year’s swing state polls). The results in 95 of those polls would show candidates getting support somewhere in the 46.7% to 53.3% range — even though we know in this imaginary world that the race is actually tied at 50%. The other five polls would show support falling outside that range.

This variation is known as the “margin of error” of a poll – i.e., how much random sampling alone, even when every selected voter responds, can shift a poll’s estimate of a candidate’s support.

Because each candidate’s support varies randomly, 95 out of 100 polls of a tied race would report a margin somewhere between -6.6 and +6.6 points (with the other five showing even larger margins).
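For readers who want to check the arithmetic, here is a minimal sketch in Python (standard library only) of the perfect-world calculation: the 95% margin of error for one candidate's share in an 863-person poll of a tied race, and the corresponding range for the candidate-versus-candidate margin. Doubling the rounded 3.3-point share figure gives the ±6.6 range quoted above.

```python
import math

# A tied race: each candidate at p = 0.5, polled with a sample of
# n = 863 voters (the average swing state sample size cited above).
n, p = 863, 0.5

se = math.sqrt(p * (1 - p) / n)   # standard error of one candidate's share
share_moe = 1.96 * se             # 95% margin of error: about 3.3 points
low, high = p - share_moe, p + share_moe

# The margin (candidate A minus candidate B) equals 2p - 1, so its
# sampling error is exactly twice the share's: about 6.7 points, or
# +/-6.6 if you double the rounded 3.3-point figure.
margin_moe = 2 * share_moe

print(f"share: {100 * low:.1f}% to {100 * high:.1f}%")  # 46.7% to 53.3%
print(f"margin: +/-{100 * margin_moe:.1f} points")
```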

It’s important to highlight that the range of margins we can expect in a tied race (and in a perfect polling world) is much larger than the margins separating the candidates in the swing states in 2020. *Even under ideal conditions for polling*, it is difficult, if not impossible, for a poll to be very informative about who is leading a close race. And this is arguably a lower bound on the variation we should observe in the messier real world, where polls vary in how respondents are selected, contacted and weighted to match the voters the polls think will turn out in 2024.

We can also calculate what proportion of 863-person polls we can expect to show different margins in a truly tied race. Rounded to the nearest percentage point, about 11% of polls in a tied race should show a tie.

This means that, due to chance alone, almost 9 out of 10 polls of a truly tied race should show something other than a tie.

About 32% of polls should have a margin of 1 point or closer, 55% should have a margin of 2 points or closer, and 69% should have a margin of 3 points or closer. Even in a 50-50 race, about 10% of the polls should have more than a 5-point margin due to inherent randomness – almost the same percentage showing a (rounded) tie!
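These proportions can be verified by brute force. The sketch below (Python, standard library only) enumerates every possible outcome of an 863-person poll of a truly tied race via the binomial distribution and adds up the probability of each rounded margin; it should reproduce the percentages above.

```python
import math

N = 863  # average sample size of this year's swing state polls

# In a truly tied race, the number of respondents backing candidate A
# is Binomial(N, 0.5), and the reported margin works out to
# 100 * (2k - N) / N percentage points.
pmf = [math.comb(N, k) * 0.5 ** N for k in range(N + 1)]

def share_of_polls(pred):
    """Probability that a single poll's rounded margin satisfies `pred`."""
    return sum(p for k, p in enumerate(pmf)
               if pred(round(100 * (2 * k - N) / N)))

print(f"exact tie:          {share_of_polls(lambda m: m == 0):.0%}")       # 11%
print(f"within 1 point:     {share_of_polls(lambda m: abs(m) <= 1):.0%}")  # 32%
print(f"within 2 points:    {share_of_polls(lambda m: abs(m) <= 2):.0%}")  # 55%
print(f"within 3 points:    {share_of_polls(lambda m: abs(m) <= 3):.0%}")  # 69%
print(f"more than 5 points: {share_of_polls(lambda m: abs(m) > 5):.0%}")   # 10%
```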

With enough polls, the predicted margin should also look like a “bell curve” normal distribution—with a similar number of polls showing both candidates leading.
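A quick Monte Carlo makes the bell curve concrete: simulating thousands of 863-person polls of a tied race yields margins spread symmetrically around zero, with each candidate "leading" in roughly half of the simulated polls. (The simulation below is an illustration of sampling randomness, not anyone's actual polling methodology.)

```python
import random
import statistics

random.seed(0)  # reproducible illustration
N, SIMS = 863, 10_000

margins = []
for _ in range(SIMS):
    k = sum(random.random() < 0.5 for _ in range(N))  # respondents backing candidate A
    margins.append(100 * (2 * k - N) / N)             # margin in percentage points

print(f"mean margin: {statistics.mean(margins):+.2f} points")  # close to 0
print(f"std. dev.:   {statistics.stdev(margins):.2f} points")  # about 3.4 points
print(f"A leads in {sum(m > 0 for m in margins) / SIMS:.0%} of simulated polls")
```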

## What are we seeing in swing state polls?

Actual swing state polls show far less variation than the benchmarks we would expect in a perfect polling world. Across the 321 polls in the seven swing states, only 9 polls (3%) report a margin of more than 5 points. Even if every race were tied — which they aren’t — we’d still expect about 32 of the 321 polls to show a margin of more than 5 points due to chance.

Visualizing how the reported margins compare to what we would expect in a perfect polling world strongly suggests “herding” of swing state polling margins around the statewide polling averages. In those 321 state polls, 69 (21%) report an exact tie, and 124 (39%) report a margin of 1 percentage point or less. Both of these numbers are roughly double what we would expect in a perfect polling world, where the only source of variation is the random sample of voters who respond.
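A back-of-envelope tally shows how hard these counts are to square with sampling randomness alone. The sketch below uses the tied-race percentages from earlier in the piece as the benchmark; this is generous, since the races are presumably not all exactly tied, and real differences in the true margins would spread the poll results out even further.

```python
# Tied-race benchmark probabilities cited earlier in the piece:
# ~11% exact tie, ~32% within 1 point, ~10% more than 5 points.
benchmarks = {"exact tie": 0.11, "within 1 point": 0.32, "more than 5 points": 0.10}

# Reported counts across the 321 swing state polls.
observed = {"exact tie": 69, "within 1 point": 124, "more than 5 points": 9}

N_POLLS = 321
for outcome, prob in benchmarks.items():
    print(f"{outcome}: expected ~{prob * N_POLLS:.0f} polls, "
          f"observed {observed[outcome]}")
```

Even against this generous tied-race yardstick, there are about twice as many exact ties as expected and barely a quarter of the expected number of polls with margins greater than 5 points.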

Pennsylvania is perhaps the most troubling state. Fully 20 out of 59 polls there (34%) show an exact tie and 26 (44%) show a margin of 1 point or less. And while there is a 15% chance that a true tied race could produce a poll with more than a 5-point margin due to chance, we see only 2 out of 59 Pennsylvania polls (3.3%) with a margin greater than 5 points.

Even in states where the results are not as tightly clustered, such as Arizona, Michigan and Wisconsin, there are still far more polls than we would expect near the polling average and too few polls with large margins.

## What is happening?

The concentrated margins we see in swing state polls likely reflect one of two possibilities.

One possibility is that pollsters sometimes adjust a result that looks “odd” to them by choosing a weighting scheme that produces numbers closer to what other polls show. There are strong incentives for risk-averse pollsters to do so. Unless pollsters conduct many polls and can be sure that the effects of chance average out, they may fear the reputational and financial costs of reporting a result that is wrong purely by chance, since pollsters are judged on their accuracy.

A risk-averse pollster who gets a 5-point margin in a race they believe is tied may choose to “adjust” the results to something closer to what other polls show, lest their dissenting poll damage their reputation relative to competitors.

Another, more likely, possibility is that some of the tools pollsters are using in 2024 to address the 2020 polling problems, such as weighting by partisanship, past vote or other factors, may smooth out differences and reduce the variability in reported poll results. The effect of such decisions is subtle but important, because it means that the similarity of the polls is driven by the decisions of the pollsters rather than the voters.

And if those weighting assumptions are wrong, something that won’t be known until after the election, then the risk of a significant polling error grows even as the variability across polls shrinks.

## Why this matters

The fact that so many swing state polls report similarly close margins is a problem because it raises questions about whether the polls are even in these races because of the voters or because of the pollsters. Is 2024 going to be as close as 2020 because our politics are stable, or do the 2024 polls resemble the 2020 results only because of the decisions state pollsters are making? The fact that the polls are more tightly clustered than we would expect even in a perfect polling world lends weight to the second scenario.

The reported polls and voting averages create a consensus that the race will be very close and we will likely see a result similar to 2020. Maybe that is true. It would be wonderful for pollsters to successfully address the concerns of 2016 and 2020 in 2024.

But the fact that the polls all report such similar margins doesn’t necessarily make those margins more likely to match the final result. In fact, it raises the possibility that the election could unexpectedly turn out differently from the knife-edge narrative that the cluster of state polls and poll averages suggests.