by Joe Angert, Kieran Burkhardt, Sam Butler, Jane Hepp, Gia Jackson, Sydney Jones, Komal Kumar, Poorabi Nanda, Samuel Schlimpert, Sophie Thorpe, and William Bianco
As polling averages shifted towards Republicans in the closing weeks of the 2022 midterms, one interpretation was that Americans were reverting to the usual pattern of favoring out-party candidates. Other observers argued that voter intentions were not changing and that the shift was driven by the release of a disproportionate number of pro-Republican polls – an argument supported by the unexpectedly favorable results for Democratic candidates on Election Day.
Our analysis of close Senate races in the 2018 and 2022 elections identifies clear over-time changes in the types of polls incorporated into polling averages. While the two elections had similar numbers of polls at different points in time, there were sharp differences in the types of polls being released. In 2018, nonpartisan pollsters dominated the closing weeks of the campaign. In 2022, a strong majority of polls came from partisan organizations, with an almost 2:1 ratio of pro-Republican to pro-Democratic polls. These results suggest that polling aggregations, a key tool for analyzing modern campaigns, may be susceptible to over-time variation in the mix of polls that make up these averages.
To build this conclusion, we downloaded data from FiveThirtyEight on the polls conducted in close Senate elections in 2018 and 2022 during the last ten weeks (70 days) of each campaign.[1] Across the two elections, there were a total of 68 pollsters and 705 polls. For each poll, our data includes the release date, the sponsoring organization, and the firm conducting the poll. We used the latter two variables, along with an investigation of organization and pollster websites (including self-reports of ideology and party affiliations of candidates they worked for), to code each poll as nonpartisan, pro-Republican, or pro-Democratic. For example, Data for Progress was coded as pro-Democratic, the Marquette Law School Poll as nonpartisan, and the Trafalgar Group as pro-Republican.
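As a rough illustration of this coding step, the Python sketch below maps sponsor and pollster names to one of the three categories. The file name, column names, and the entries in the lookup table are illustrative assumptions, not FiveThirtyEight's actual field names or our full coding list.

```python
import pandas as pd

# A minimal sketch of the partisan coding step. Column names ('pollster',
# 'sponsors') and the CSV file name are assumptions for illustration.
LEAN = {
    "Data for Progress": "pro-Democratic",
    "Marquette Law School": "nonpartisan",
    "Trafalgar Group": "pro-Republican",
    # ... remaining pollsters coded from organization and pollster websites
}

def code_poll(row):
    """Return the partisan coding for a poll, checking sponsor first, then pollster."""
    for name, lean in LEAN.items():
        if name in str(row.get("sponsors", "")) or name in str(row.get("pollster", "")):
            return lean
    return "uncoded"  # flag for manual review

polls = pd.read_csv("senate_polls.csv")
polls["lean"] = polls.apply(code_poll, axis=1)
```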
Figure 1 shows the over-time distribution of the total number of polls released for the selected Senate races in the 2018 and 2022 general election campaigns, along with best-fit logarithmic trend lines. In the aggregate, the two elections are very similar, with polls being released more frequently as Election Day approached.
Figure 1. Weekly Poll Releases in Close Senate Races, 2018 and 2022 Midterms
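The weekly totals and logarithmic trend lines in Figure 1 can be reproduced with an aggregation step along the following lines. This is a sketch that builds on the coded dataframe above; the 'end_date' column name and the use of 2022's Election Day are assumptions, with 2018 handled the same way.

```python
import numpy as np
import pandas as pd

# Sketch of the weekly aggregation and logarithmic trend behind Figure 1.
# Assumes the 'polls' dataframe from the coding sketch and an 'end_date' column.
polls["end_date"] = pd.to_datetime(polls["end_date"])
election_day = pd.Timestamp("2022-11-08")
window = polls[(election_day - polls["end_date"]).dt.days.between(0, 69)].copy()
window["week"] = 10 - (election_day - window["end_date"]).dt.days // 7  # week 10 = final week

weekly = window.groupby("week").size().rename("n_polls").reset_index()

# Best-fit logarithmic trend: n_polls ~ a + b * ln(week)
b, a = np.polyfit(np.log(weekly["week"]), weekly["n_polls"], 1)
weekly["trend"] = a + b * np.log(weekly["week"])
```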
A very different picture emerges if we disaggregate the weekly totals in Figure 1 into pro-Democratic, nonpartisan, and pro-Republican polls. The left-hand plot in Figure 2 shows that as the 2018 campaign progressed, the percentage of nonpartisan polls increased while the percentages of pro-Republican and pro-Democratic polls decreased (the trend lines are again logarithmic). Moreover, at the end of the campaign, the percentages of Republican and Democratic polls were roughly the same.
The right-hand plot shows that in 2022, the percentage of nonpartisan polls decreased in the last weeks of the campaign, as did the percentage conducted by pro-Democratic pollsters. The percentage of pro-Republican polls, however, increased dramatically, so that during the last week of the campaign there were more than twice as many Republican polls as Democratic polls, and almost as many Republican polls as nonpartisan polls.
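The disaggregation behind Figure 2 is a small extension of the same pipeline: compute each week's share of polls by partisan category. Again, this is only a sketch, continuing from the dataframes in the earlier code.

```python
# Weekly share of polls by partisan category (continues from the 'window'
# dataframe in the previous sketch; 'lean' is the coding assigned earlier).
shares = window.groupby(["week", "lean"]).size().unstack(fill_value=0)
shares = shares.div(shares.sum(axis=1), axis=0) * 100  # percent of that week's polls
print(shares.round(1))
```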
To be clear, we are not alleging a conspiracy among Republican pollsters to influence campaign narratives. It is certainly possible that pollsters of all stripes made independent decisions about poll timing, so the differences we see are chance correlations. Moreover, insofar as the disproportionate release of Republican polls did suggest an impending red wave in 2022, at least part of the responsibility lies with nonpartisan and pro-Democratic pollsters, whose polling cadence in the closing weeks of the campaign did not match their Republican counterparts.
Figure 2. Distribution of Poll Types
Even so, our results raise new concerns about the use of polling averages to assess campaign dynamics. A shift from one week to another may reflect changes in underlying voter preferences, but it can also reflect differences in the types of polls used to construct polling averages. The concern is particularly acute for sites that aggregate polls without controlling for house effects (pollster-specific corrections for systematic partisan lean). Over-time differences in the mix of released polls, as in 2022, could cause significant shifts in simple averages even as voter preferences remain the same.
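A stylized example makes the point. The numbers below are purely hypothetical and are not drawn from our data: if pro-Democratic and pro-Republican pollsters each carry a stable three-point lean, a 2022-style shift in the mix of released polls moves a simple average by about a point even though no voter has changed their mind.

```python
import numpy as np

# Hypothetical illustration only: a change in the mix of polls can move a
# simple average even when every pollster's reading of the race is unchanged.
true_margin = 1.0  # suppose the race is actually D+1
lean = {"pro-Democratic": 3.0, "nonpartisan": 0.0, "pro-Republican": -3.0}

def simple_average(mix):
    """Unweighted average margin given a dict of {poll type: number of polls}."""
    readings = [true_margin + lean[t] for t, n in mix.items() for _ in range(n)]
    return np.mean(readings)

early_week = {"pro-Democratic": 4, "nonpartisan": 8, "pro-Republican": 4}   # balanced mix
final_week = {"pro-Democratic": 3, "nonpartisan": 7, "pro-Republican": 10}  # 2022-style mix

print(simple_average(early_week))  # 1.0  (reads as D+1)
print(simple_average(final_week))  # -0.05 (reads as roughly even)
```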
Our results are also salient for aggregators who use pollster house effects to adjust raw polling data. In theory, these corrections remove poll-specific partisan biases, allowing polling averages to be compared week-to-week, even given changes in the types of polls being released. However, in most cases, aggregators use black-box models to estimate and incorporate house effects, making it impossible to assess the viability of this strategy. In a race where there are about the same proportion of pro-Democratic and pro-Republican polls (or if nonpartisan polls dominate), incorporating inaccurate house effects is probably innocuous, as errors are likely to cancel each other out. But under a 2022 scenario, where partisan polling is dominated by one side, inaccurate house effects estimates could cause more harm than good. Our results highlight the need for greater transparency about house effects to determine whether this technique corrects for over-time variation in poll types – or whether it introduces new problems.
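One transparent alternative to a black-box correction is a simple fixed-effects regression in which each pollster receives an additive offset relative to a common baseline. The sketch below is our illustration of that idea, not any aggregator's actual model; the 'margin' and 'race' columns are assumed, and it continues from the dataframes in the earlier sketches.

```python
import statsmodels.formula.api as smf

# Illustrative fixed-effects estimate of house effects: each pollster gets an
# additive offset after controlling for which race a poll measured. Assumes a
# 'margin' column (Dem minus Rep share) and a 'race' identifier in 'window';
# this is a sketch, not any aggregator's production model.
fit = smf.ols("margin ~ C(race) + C(pollster)", data=window).fit()
house_effects = fit.params.filter(like="C(pollster)")
print(house_effects.sort_values().round(1))
```

Publishing estimates of this kind, even in such a simple form, would let readers judge whether an aggregator's corrections are plausible in a lopsided late-campaign sample like 2022's.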
[1] The 2018 states were FL, AZ, WV, TX, MT, MO, NV, and IN, while the 2022 states were GA, NC, OH, AZ, WI, PA, NH, and NV. Poll release dates were estimated from the closing date of the poll, with press releases used to validate this assumption.